00:00:00.001 Started by upstream project "autotest-per-patch" build number 126187
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "jbp-per-patch" build number 23939
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.040 The recommended git tool is: git
00:00:00.040 using credential 00000000-0000-0000-0000-000000000002
00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.066 Fetching changes from the remote Git repository
00:00:00.067 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.110 Using shallow fetch with depth 1
00:00:00.110 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.110 > git --version # timeout=10
00:00:00.181 > git --version # 'git version 2.39.2'
00:00:00.181 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/75/21875/23 # timeout=5
00:00:04.070 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.083 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.096 Checking out Revision 642aedf8bba2e584685fe6e0b1310032564b5451 (FETCH_HEAD)
00:00:04.096 > git config core.sparsecheckout # timeout=10
00:00:04.108 > git read-tree -mu HEAD # timeout=10
00:00:04.127 > git checkout -f 642aedf8bba2e584685fe6e0b1310032564b5451 # timeout=5
00:00:04.155 Commit message: "jenkins/jjb-config: Remove SPDK_TEST_RELEASE_BUILD from packaging job"
00:00:04.155 > git rev-list --no-walk 5fe533b64b2bcae2206a8f61fddcc62257280cde # timeout=10
00:00:04.253 [Pipeline] Start of Pipeline
00:00:04.264 [Pipeline] library
00:00:04.265 Loading library shm_lib@master
00:00:04.265 Library shm_lib@master is cached. Copying from home.
00:00:04.282 [Pipeline] node
00:00:04.295 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.297 [Pipeline] {
00:00:04.313 [Pipeline] catchError
00:00:04.315 [Pipeline] {
00:00:04.331 [Pipeline] wrap
00:00:04.340 [Pipeline] {
00:00:04.346 [Pipeline] stage
00:00:04.347 [Pipeline] { (Prologue)
00:00:04.514 [Pipeline] sh
00:00:04.796 + logger -p user.info -t JENKINS-CI
00:00:04.814 [Pipeline] echo
00:00:04.816 Node: CYP11
00:00:04.822 [Pipeline] sh
00:00:05.166 [Pipeline] setCustomBuildProperty
00:00:05.179 [Pipeline] echo
00:00:05.180 Cleanup processes
00:00:05.184 [Pipeline] sh
00:00:05.469 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.469 1012924 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.480 [Pipeline] sh
00:00:05.758 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.758 ++ grep -v 'sudo pgrep'
00:00:05.758 ++ awk '{print $1}'
00:00:05.758 + sudo kill -9
00:00:05.758 + true
00:00:05.769 [Pipeline] cleanWs
00:00:05.777 [WS-CLEANUP] Deleting project workspace...
00:00:05.777 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.783 [WS-CLEANUP] done
00:00:05.787 [Pipeline] setCustomBuildProperty
00:00:05.797 [Pipeline] sh
00:00:06.074 + sudo git config --global --replace-all safe.directory '*'
00:00:06.155 [Pipeline] httpRequest
00:00:06.188 [Pipeline] echo
00:00:06.189 Sorcerer 10.211.164.101 is alive
00:00:06.196 [Pipeline] httpRequest
00:00:06.200 HttpMethod: GET
00:00:06.200 URL: http://10.211.164.101/packages/jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:06.201 Sending request to url: http://10.211.164.101/packages/jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:06.210 Response Code: HTTP/1.1 200 OK
00:00:06.210 Success: Status code 200 is in the accepted range: 200,404
00:00:06.211 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:08.540 [Pipeline] sh
00:00:08.856 + tar --no-same-owner -xf jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:08.875 [Pipeline] httpRequest
00:00:08.899 [Pipeline] echo
00:00:08.900 Sorcerer 10.211.164.101 is alive
00:00:08.909 [Pipeline] httpRequest
00:00:08.914 HttpMethod: GET
00:00:08.915 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz
00:00:08.915 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz
00:00:08.934 Response Code: HTTP/1.1 200 OK
00:00:08.935 Success: Status code 200 is in the accepted range: 200,404
00:00:08.935 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz
00:01:24.222 [Pipeline] sh
00:01:24.508 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz
00:01:27.813 [Pipeline] sh
00:01:28.097 + git -C spdk log --oneline -n5
00:01:28.097 2728651ee accel: adjust task per ch define name
00:01:28.097 e7cce062d Examples/Perf: correct the calculation of total bandwidth
00:01:28.097 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS
00:01:28.097 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts
00:01:28.097 719d03c6a sock/uring: only register net impl if supported
00:01:28.112 [Pipeline] }
00:01:28.130 [Pipeline] // stage
00:01:28.140 [Pipeline] stage
00:01:28.142 [Pipeline] { (Prepare)
00:01:28.162 [Pipeline] writeFile
00:01:28.181 [Pipeline] sh
00:01:28.466 + logger -p user.info -t JENKINS-CI
00:01:28.480 [Pipeline] sh
00:01:28.763 + logger -p user.info -t JENKINS-CI
00:01:28.776 [Pipeline] sh
00:01:29.090 + cat autorun-spdk.conf
00:01:29.090 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.090 SPDK_TEST_NVMF=1
00:01:29.090 SPDK_TEST_NVME_CLI=1
00:01:29.090 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:29.090 SPDK_TEST_NVMF_NICS=e810
00:01:29.090 SPDK_TEST_VFIOUSER=1
00:01:29.090 SPDK_RUN_UBSAN=1
00:01:29.090 NET_TYPE=phy
00:01:29.110 RUN_NIGHTLY=0
00:01:29.115 [Pipeline] readFile
00:01:29.163 [Pipeline] withEnv
00:01:29.164 [Pipeline] {
00:01:29.175 [Pipeline] sh
00:01:29.462 + set -ex
00:01:29.462 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:29.462 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:29.462 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.462 ++ SPDK_TEST_NVMF=1
00:01:29.462 ++ SPDK_TEST_NVME_CLI=1
00:01:29.462 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:29.462 ++ SPDK_TEST_NVMF_NICS=e810
00:01:29.462 ++ SPDK_TEST_VFIOUSER=1
00:01:29.462 ++ SPDK_RUN_UBSAN=1
00:01:29.462 ++ NET_TYPE=phy
00:01:29.462 ++ RUN_NIGHTLY=0
00:01:29.462 + case $SPDK_TEST_NVMF_NICS in
00:01:29.462 + DRIVERS=ice
00:01:29.462 + [[ tcp == \r\d\m\a ]]
00:01:29.462 + [[ -n ice ]]
00:01:29.462 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:29.462 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:29.462 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:29.462 rmmod: ERROR: Module irdma is not currently loaded
00:01:29.462 rmmod: ERROR: Module i40iw is not currently loaded
00:01:29.462 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:29.462 + true
00:01:29.462 + for D in $DRIVERS
00:01:29.462 + sudo modprobe ice
00:01:29.462 + exit 0
00:01:29.472 [Pipeline] }
00:01:29.491 [Pipeline] // withEnv
00:01:29.497 [Pipeline] }
00:01:29.515 [Pipeline] // stage
00:01:29.526 [Pipeline] catchError
00:01:29.528 [Pipeline] {
00:01:29.543 [Pipeline] timeout
00:01:29.543 Timeout set to expire in 50 min
00:01:29.545 [Pipeline] {
00:01:29.561 [Pipeline] stage
00:01:29.563 [Pipeline] { (Tests)
00:01:29.578 [Pipeline] sh
00:01:29.862 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.862 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.862 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.862 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:29.862 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.862 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:29.862 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:29.862 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:29.862 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:29.862 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:29.862 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:29.862 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.862 + source /etc/os-release
00:01:29.862 ++ NAME='Fedora Linux'
00:01:29.862 ++ VERSION='38 (Cloud Edition)'
00:01:29.862 ++ ID=fedora
00:01:29.862 ++ VERSION_ID=38
00:01:29.862 ++ VERSION_CODENAME=
00:01:29.862 ++ PLATFORM_ID=platform:f38
00:01:29.862 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:29.862 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:29.862 ++ LOGO=fedora-logo-icon
00:01:29.862 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:29.862 ++ HOME_URL=https://fedoraproject.org/
00:01:29.862 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:29.862 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:29.862 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:29.862 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:29.862 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:29.863 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:29.863 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:29.863 ++ SUPPORT_END=2024-05-14
00:01:29.863 ++ VARIANT='Cloud Edition'
00:01:29.863 ++ VARIANT_ID=cloud
00:01:29.863 + uname -a
00:01:29.863 Linux spdk-cyp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:29.863 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:33.159 Hugepages
00:01:33.159 node hugesize free / total
00:01:33.159 node0 1048576kB 0 / 0
00:01:33.159 node0 2048kB 0 / 0
00:01:33.159 node1 1048576kB 0 / 0
00:01:33.159 node1 2048kB 0 / 0
00:01:33.159
00:01:33.159 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:33.159 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:33.159 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:33.159 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:33.159 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:33.159 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:33.159 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:33.159 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:33.159 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:33.159 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:33.159 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:33.159 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:33.159 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:33.159 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:33.159 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:33.159 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:33.159 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:33.159 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:33.159 + rm -f /tmp/spdk-ld-path
00:01:33.159 + source autorun-spdk.conf
00:01:33.159 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:33.159 ++ SPDK_TEST_NVMF=1
00:01:33.159 ++ SPDK_TEST_NVME_CLI=1
00:01:33.159 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:33.159 ++ SPDK_TEST_NVMF_NICS=e810
00:01:33.159 ++ SPDK_TEST_VFIOUSER=1
00:01:33.159 ++ SPDK_RUN_UBSAN=1
00:01:33.159 ++ NET_TYPE=phy
00:01:33.159 ++ RUN_NIGHTLY=0
00:01:33.159 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:33.159 + [[ -n '' ]]
00:01:33.159 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:33.419 + for M in /var/spdk/build-*-manifest.txt
00:01:33.419 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:33.419 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:33.419 + for M in /var/spdk/build-*-manifest.txt
00:01:33.419 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:33.419 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:33.420 ++ uname
00:01:33.420 + [[ Linux == \L\i\n\u\x ]]
00:01:33.420 + sudo dmesg -T
00:01:33.420 + sudo dmesg --clear
00:01:33.420 + dmesg_pid=1014017
00:01:33.420 + [[ Fedora Linux == FreeBSD ]]
00:01:33.420 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:33.420 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:33.420 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:33.420 + [[ -x /usr/src/fio-static/fio ]]
00:01:33.420 + export FIO_BIN=/usr/src/fio-static/fio
00:01:33.420 + FIO_BIN=/usr/src/fio-static/fio
00:01:33.420 + sudo dmesg -Tw
00:01:33.420 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:33.420 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:33.420 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:33.420 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:33.420 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:33.420 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:33.420 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:33.420 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:33.420 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:33.420 Test configuration:
00:01:33.420 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:33.420 SPDK_TEST_NVMF=1
00:01:33.420 SPDK_TEST_NVME_CLI=1
00:01:33.420 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:33.420 SPDK_TEST_NVMF_NICS=e810
00:01:33.420 SPDK_TEST_VFIOUSER=1
00:01:33.420 SPDK_RUN_UBSAN=1
00:01:33.420 NET_TYPE=phy
00:01:33.420 RUN_NIGHTLY=0
13:47:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:33.420 13:47:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:33.420 13:47:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:33.420 13:47:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:33.420 13:47:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:33.420 13:47:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:33.420 13:47:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:33.420 13:47:31 -- paths/export.sh@5 -- $ export PATH
00:01:33.420 13:47:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:33.420 13:47:31 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:33.420 13:47:31 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:33.420 13:47:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721044051.XXXXXX
00:01:33.420 13:47:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721044051.hEZf55
00:01:33.420 13:47:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:33.420 13:47:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:33.420 13:47:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:33.420 13:47:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:33.420 13:47:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:33.420 13:47:31 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:33.420 13:47:31 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:33.420 13:47:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:33.420 13:47:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:33.420 13:47:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:33.420 13:47:31 -- pm/common@17 -- $ local monitor
00:01:33.420 13:47:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:33.420 13:47:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:33.420 13:47:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:33.420 13:47:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:33.420 13:47:31 -- pm/common@21 -- $ date +%s
00:01:33.420 13:47:31 -- pm/common@25 -- $ sleep 1
00:01:33.420 13:47:31 -- pm/common@21 -- $ date +%s
00:01:33.420 13:47:31 -- pm/common@21 -- $ date +%s
00:01:33.420 13:47:31 -- pm/common@21 -- $ date +%s
00:01:33.420 13:47:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721044051
00:01:33.420 13:47:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721044051
00:01:33.420 13:47:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721044051
00:01:33.420 13:47:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721044051
00:01:33.681 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721044051_collect-vmstat.pm.log
00:01:33.681 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721044051_collect-cpu-load.pm.log
00:01:33.681 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721044051_collect-cpu-temp.pm.log
00:01:33.681 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721044051_collect-bmc-pm.bmc.pm.log
00:01:34.616 13:47:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:34.616 13:47:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:34.616 13:47:32 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:34.616 13:47:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:34.616 13:47:32 -- spdk/autobuild.sh@16 -- $ date -u
00:01:34.616 Mon Jul 15 11:47:32 AM UTC 2024
00:01:34.616 13:47:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:34.616 v24.09-pre-206-g2728651ee
00:01:34.616 13:47:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:34.616 13:47:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:34.616 13:47:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:34.616 13:47:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:34.616 13:47:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:34.616 13:47:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:34.616 ************************************
00:01:34.616 START TEST ubsan
00:01:34.616 ************************************
00:01:34.616 13:47:32 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:34.616 using ubsan
00:01:34.616
00:01:34.616 real 0m0.000s
00:01:34.616 user 0m0.000s
00:01:34.616 sys 0m0.000s
00:01:34.616 13:47:32 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:34.616 13:47:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:34.616 ************************************
00:01:34.616 END TEST ubsan
00:01:34.616 ************************************
00:01:34.616 13:47:32 -- common/autotest_common.sh@1142 -- $ return 0
00:01:34.616 13:47:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:34.616 13:47:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:34.616 13:47:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:34.616 13:47:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:34.616 13:47:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:34.616 13:47:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:34.616 13:47:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:34.616 13:47:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:34.616 13:47:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:34.876 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:34.876 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:35.136 Using 'verbs' RDMA provider
00:01:50.974 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:03.187 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:03.187 Creating mk/config.mk...done.
00:02:03.187 Creating mk/cc.flags.mk...done.
00:02:03.187 Type 'make' to build.
00:02:03.187 13:48:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:02:03.187 13:48:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:03.187 13:48:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:03.187 13:48:00 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.187 ************************************
00:02:03.188 START TEST make
00:02:03.188 ************************************
00:02:03.188 13:48:00 make -- common/autotest_common.sh@1123 -- $ make -j144
00:02:03.203 make[1]: Nothing to be done for 'all'.
00:02:04.126 The Meson build system
00:02:04.126 Version: 1.3.1
00:02:04.126 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:04.126 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:04.126 Build type: native build
00:02:04.126 Project name: libvfio-user
00:02:04.126 Project version: 0.0.1
00:02:04.126 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:04.126 C linker for the host machine: cc ld.bfd 2.39-16
00:02:04.126 Host machine cpu family: x86_64
00:02:04.126 Host machine cpu: x86_64
00:02:04.126 Run-time dependency threads found: YES
00:02:04.126 Library dl found: YES
00:02:04.126 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:04.126 Run-time dependency json-c found: YES 0.17
00:02:04.126 Run-time dependency cmocka found: YES 1.1.7
00:02:04.126 Program pytest-3 found: NO
00:02:04.126 Program flake8 found: NO
00:02:04.126 Program misspell-fixer found: NO
00:02:04.126 Program restructuredtext-lint found: NO
00:02:04.126 Program valgrind found: YES (/usr/bin/valgrind)
00:02:04.126 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:04.126 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:04.126 Compiler for C supports arguments -Wwrite-strings: YES
00:02:04.126 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:04.126 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:04.126 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:04.127 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:04.127 Build targets in project: 8
00:02:04.127 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:04.127 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:04.127
00:02:04.127 libvfio-user 0.0.1
00:02:04.127
00:02:04.127 User defined options
00:02:04.127 buildtype : debug
00:02:04.127 default_library: shared
00:02:04.127 libdir : /usr/local/lib
00:02:04.127
00:02:04.127 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:04.399 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:04.399 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:04.399 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:04.399 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:04.399 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:04.399 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:04.399 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:04.399 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:04.399 [8/37] Compiling C object samples/null.p/null.c.o
00:02:04.399 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:04.399 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:04.399 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:04.399 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:04.399 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:04.399 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:04.399 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:04.399 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:04.399 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:04.399 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:04.399 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:04.399 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:04.399 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:04.399 [22/37] Compiling C object samples/server.p/server.c.o
00:02:04.399 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:04.399 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:04.399 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:04.658 [26/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:04.658 [27/37] Compiling C object samples/client.p/client.c.o
00:02:04.658 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:04.658 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:04.658 [30/37] Linking target test/unit_tests
00:02:04.658 [31/37] Linking target samples/client
00:02:04.658 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:04.658 [33/37] Linking target samples/shadow_ioeventfd_server
00:02:04.658 [34/37] Linking target samples/lspci
00:02:04.658 [35/37] Linking target samples/server
00:02:04.658 [36/37] Linking target samples/gpio-pci-idio-16
00:02:04.658 [37/37] Linking target samples/null
00:02:04.658 INFO: autodetecting backend as ninja
00:02:04.658 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:04.917 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:05.178 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:05.178 ninja: no work to do.
00:02:11.759 The Meson build system
00:02:11.759 Version: 1.3.1
00:02:11.759 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:11.759 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:11.759 Build type: native build
00:02:11.759 Program cat found: YES (/usr/bin/cat)
00:02:11.759 Project name: DPDK
00:02:11.759 Project version: 24.03.0
00:02:11.759 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:11.759 C linker for the host machine: cc ld.bfd 2.39-16
00:02:11.759 Host machine cpu family: x86_64
00:02:11.759 Host machine cpu: x86_64
00:02:11.759 Message: ## Building in Developer Mode ##
00:02:11.759 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:11.759 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:11.759 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:11.759 Program python3 found: YES (/usr/bin/python3)
00:02:11.759 Program cat found: YES (/usr/bin/cat)
00:02:11.759 Compiler for C supports arguments -march=native: YES
00:02:11.759 Checking for size of "void *" : 8
00:02:11.759 Checking for size of "void *" : 8 (cached)
00:02:11.759 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:11.759 Library m found: YES
00:02:11.759 Library numa found: YES
00:02:11.759 Has header "numaif.h" : YES
00:02:11.759 Library fdt found: NO
00:02:11.759 Library execinfo found: NO
00:02:11.759 Has header "execinfo.h" : YES
00:02:11.759 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:11.759 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:11.759 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:11.759 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:11.759 Run-time dependency openssl found: YES 3.0.9
00:02:11.759 Run-time dependency libpcap found: YES 1.10.4
00:02:11.759 Has header "pcap.h" with dependency libpcap: YES
00:02:11.759 Compiler for C supports arguments -Wcast-qual: YES
00:02:11.759 Compiler for C supports arguments -Wdeprecated: YES
00:02:11.759 Compiler for C supports arguments -Wformat: YES
00:02:11.759 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:11.759 Compiler for C supports arguments -Wformat-security: NO
00:02:11.759 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:11.759 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:11.759 Compiler for C supports arguments -Wnested-externs: YES
00:02:11.759 Compiler for C supports arguments -Wold-style-definition: YES
00:02:11.759 Compiler for C supports arguments -Wpointer-arith: YES
00:02:11.759 Compiler for C supports arguments -Wsign-compare: YES
00:02:11.759 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:11.759 Compiler for C supports arguments -Wundef: YES
00:02:11.759 Compiler for C supports arguments -Wwrite-strings: YES
00:02:11.759 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:11.759 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:11.759 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:11.759 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:11.759 Program objdump found: YES (/usr/bin/objdump)
00:02:11.759 Compiler for C supports arguments -mavx512f: YES
00:02:11.759 Checking if "AVX512 checking" compiles: YES
00:02:11.759 Fetching value of define "__SSE4_2__" : 1
00:02:11.759 Fetching value of define "__AES__" : 1
00:02:11.759 Fetching value of define "__AVX__" : 1
00:02:11.759 Fetching value of define "__AVX2__" : 1
00:02:11.759 Fetching value of define "__AVX512BW__" : 1
00:02:11.759 Fetching value of define "__AVX512CD__" : 1
00:02:11.759 Fetching value of define "__AVX512DQ__" : 1
00:02:11.759 Fetching value of define "__AVX512F__" : 1
00:02:11.759 Fetching value of define "__AVX512VL__" : 1
00:02:11.759 Fetching value of define "__PCLMUL__" : 1
00:02:11.759 Fetching value of define "__RDRND__" : 1
00:02:11.759 Fetching value of define "__RDSEED__" : 1
00:02:11.759 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:11.759 Fetching value of define "__znver1__" : (undefined)
00:02:11.759 Fetching value of define "__znver2__" : (undefined)
00:02:11.759 Fetching value of define "__znver3__" : (undefined)
00:02:11.759 Fetching value of define "__znver4__" : (undefined)
00:02:11.759 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:11.759 Message: lib/log: Defining dependency "log"
00:02:11.759 Message: lib/kvargs: Defining dependency "kvargs"
00:02:11.759 Message: lib/telemetry: Defining dependency "telemetry"
00:02:11.759 Checking for function "getentropy" : NO
00:02:11.759 Message: lib/eal: Defining dependency "eal"
00:02:11.759 Message: lib/ring: Defining dependency "ring"
00:02:11.759 Message: lib/rcu: Defining dependency "rcu"
00:02:11.759 Message: lib/mempool: Defining dependency "mempool"
00:02:11.759 Message: lib/mbuf: Defining dependency "mbuf"
00:02:11.759 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:11.759 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:11.759 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:11.759 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:11.759 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:11.759 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:11.759 Compiler for C supports arguments -mpclmul: YES
00:02:11.759 Compiler for C supports arguments -maes: YES
00:02:11.759 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:11.759 Compiler for C supports arguments -mavx512bw: YES
00:02:11.759 Compiler for C supports arguments -mavx512dq: YES
00:02:11.759 Compiler for C supports arguments -mavx512vl: YES
00:02:11.759 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:11.759 Compiler for C supports arguments -mavx2: YES
00:02:11.759 Compiler for C supports arguments -mavx: YES
00:02:11.759 Message: lib/net: Defining dependency "net"
00:02:11.759 Message: lib/meter: Defining dependency "meter"
00:02:11.759 Message: lib/ethdev: Defining dependency "ethdev"
00:02:11.759 Message: lib/pci: Defining dependency "pci"
00:02:11.759 Message: lib/cmdline: Defining dependency "cmdline"
00:02:11.759 Message: lib/hash: Defining dependency "hash"
00:02:11.759 Message: lib/timer: Defining dependency "timer"
00:02:11.759 Message: lib/compressdev: Defining dependency "compressdev"
00:02:11.759 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:11.759 Message: lib/dmadev: Defining dependency "dmadev"
00:02:11.759 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:11.759 Message: lib/power: Defining dependency "power"
00:02:11.759 Message: lib/reorder: Defining dependency "reorder"
00:02:11.759 Message: lib/security: Defining dependency "security"
00:02:11.759 Has header "linux/userfaultfd.h" : YES
00:02:11.759 Has header "linux/vduse.h" : YES
00:02:11.759 Message: lib/vhost: Defining dependency "vhost"
00:02:11.759 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:11.759 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:11.759 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:11.759 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:11.759 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:11.759 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:11.759 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:11.759 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:11.759 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:11.759 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:11.759 Program doxygen found: YES (/usr/bin/doxygen)
00:02:11.759 Configuring doxy-api-html.conf using configuration
00:02:11.759 Configuring doxy-api-man.conf using configuration
00:02:11.759 Program mandb found: YES (/usr/bin/mandb)
00:02:11.759 Program sphinx-build found: NO
00:02:11.759 Configuring rte_build_config.h using configuration
00:02:11.759 Message:
00:02:11.759 =================
00:02:11.759 Applications Enabled
00:02:11.759 =================
00:02:11.759
00:02:11.759 apps:
00:02:11.759
00:02:11.759
00:02:11.759 Message:
00:02:11.759 =================
00:02:11.759 Libraries Enabled
00:02:11.759 =================
00:02:11.759
00:02:11.759 libs:
00:02:11.759 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:11.759 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:11.759 cryptodev, dmadev, power, reorder, security, vhost,
00:02:11.759
00:02:11.759 Message:
00:02:11.759 ===============
00:02:11.759 Drivers Enabled
00:02:11.759 ===============
00:02:11.759
00:02:11.759 common:
00:02:11.759
00:02:11.759 bus:
00:02:11.759 pci, vdev,
00:02:11.759 mempool:
00:02:11.759 ring,
00:02:11.759 dma:
00:02:11.759
00:02:11.759 net:
00:02:11.759
00:02:11.759 crypto:
00:02:11.759
00:02:11.759 compress:
00:02:11.759
00:02:11.759 vdpa:
00:02:11.759
00:02:11.759
00:02:11.759 Message:
00:02:11.759 =================
00:02:11.759 Content Skipped
00:02:11.759 =================
00:02:11.759
00:02:11.759 apps:
00:02:11.759 dumpcap: explicitly disabled via build config
00:02:11.759 graph: explicitly disabled via build config
00:02:11.759 pdump: explicitly disabled via build config
00:02:11.759 proc-info: explicitly disabled via build config
00:02:11.759 test-acl: explicitly disabled via build config
00:02:11.759 test-bbdev: explicitly disabled via build config
00:02:11.759 test-cmdline: explicitly disabled via build config
00:02:11.759 test-compress-perf: explicitly disabled via build config
00:02:11.759 test-crypto-perf: explicitly disabled via build config
00:02:11.759 test-dma-perf: explicitly disabled via build config
00:02:11.759 test-eventdev: explicitly disabled via build config
00:02:11.760 test-fib: explicitly disabled via build config
00:02:11.760 test-flow-perf: explicitly disabled via build config
00:02:11.760 test-gpudev: explicitly disabled via build config
00:02:11.760 test-mldev: explicitly disabled via build config
00:02:11.760 test-pipeline: explicitly disabled via build config
00:02:11.760 test-pmd: explicitly disabled via build config
00:02:11.760 test-regex: explicitly disabled via build config
00:02:11.760 test-sad: explicitly disabled via build config
00:02:11.760 test-security-perf: explicitly disabled via build config
00:02:11.760
00:02:11.760 libs:
00:02:11.760 argparse: explicitly disabled via build config
00:02:11.760 metrics: explicitly disabled via build config
00:02:11.760 acl: explicitly disabled via build config
00:02:11.760 bbdev: explicitly disabled via build config
00:02:11.760 bitratestats: explicitly disabled via build config
00:02:11.760 bpf: explicitly disabled via build config
00:02:11.760 cfgfile: explicitly disabled via build config
00:02:11.760 distributor: explicitly disabled via build config
00:02:11.760 efd: explicitly disabled via build config
00:02:11.760 eventdev: explicitly disabled via build config
00:02:11.760 dispatcher: explicitly disabled via build config
00:02:11.760 gpudev: explicitly disabled via build config
00:02:11.760 gro: explicitly disabled via build config
00:02:11.760 gso: explicitly disabled via build config
00:02:11.760 ip_frag: explicitly disabled via build config
00:02:11.760 jobstats: explicitly disabled via build config
00:02:11.760 latencystats: explicitly disabled via build config
00:02:11.760 lpm: explicitly disabled via build config
00:02:11.760 member: explicitly disabled via build config
00:02:11.760 pcapng: explicitly disabled via build config
00:02:11.760 rawdev: explicitly disabled via build config
00:02:11.760 regexdev: explicitly disabled via build config
00:02:11.760 mldev: explicitly disabled via build config
00:02:11.760 rib: explicitly disabled via build config
00:02:11.760 sched: explicitly disabled via build config
00:02:11.760 stack: explicitly disabled via build config
00:02:11.760 ipsec: explicitly disabled via build config
00:02:11.760 pdcp: explicitly disabled via build config
00:02:11.760 fib: explicitly disabled via build config
00:02:11.760 port: explicitly disabled via build config
00:02:11.760 pdump: explicitly disabled via build config
00:02:11.760 table: explicitly disabled via build config
00:02:11.760 pipeline: explicitly disabled via build config
00:02:11.760 graph: explicitly disabled via build config
00:02:11.760 node: explicitly disabled via build config
00:02:11.760
00:02:11.760 drivers:
00:02:11.760 common/cpt: not in enabled drivers build config
00:02:11.760 common/dpaax: not in enabled drivers build config
00:02:11.760 common/iavf: not in enabled drivers build config
00:02:11.760 common/idpf: not in enabled drivers build config
00:02:11.760 common/ionic: not in enabled drivers build config
00:02:11.760 common/mvep: not in enabled drivers build config
00:02:11.760 common/octeontx: not in enabled drivers build config
00:02:11.760 bus/auxiliary: not in enabled drivers build config
00:02:11.760 bus/cdx: not in enabled drivers build config
00:02:11.760 bus/dpaa: not in enabled drivers build config
00:02:11.760 bus/fslmc: not in enabled drivers build config
00:02:11.760 bus/ifpga: not in enabled drivers build config
00:02:11.760 bus/platform: not in enabled drivers build config
00:02:11.760 bus/uacce: not in enabled drivers build config
00:02:11.760 bus/vmbus: not in enabled drivers build config
00:02:11.760 common/cnxk: not in enabled drivers build config
00:02:11.760 common/mlx5: not in enabled drivers build config
00:02:11.760 common/nfp: not in enabled drivers build config
00:02:11.760 common/nitrox: not in enabled drivers build config
00:02:11.760 common/qat: not in enabled drivers build config
00:02:11.760 common/sfc_efx: not in enabled drivers build config
00:02:11.760 mempool/bucket: not in enabled drivers build config
00:02:11.760 mempool/cnxk: not in enabled drivers build config
00:02:11.760 mempool/dpaa: not in enabled drivers build config
00:02:11.760 mempool/dpaa2: not in enabled drivers build config
00:02:11.760 mempool/octeontx: not in enabled drivers build config
00:02:11.760 mempool/stack: not in enabled drivers build config
00:02:11.760 dma/cnxk: not in enabled drivers build config
00:02:11.760 dma/dpaa: not in enabled drivers build config
00:02:11.760 dma/dpaa2: not in enabled drivers build config
00:02:11.760 dma/hisilicon: not in enabled drivers build config
00:02:11.760 dma/idxd: not in enabled drivers build config
00:02:11.760 dma/ioat: not in enabled drivers build config
00:02:11.760 dma/skeleton: not in enabled drivers build config
00:02:11.760 net/af_packet: not in enabled drivers build config
00:02:11.760 net/af_xdp: not in enabled drivers build config
00:02:11.760 net/ark: not in enabled drivers build config
00:02:11.760 net/atlantic: not in enabled drivers build config
00:02:11.760 net/avp: not in enabled drivers build config
00:02:11.760 net/axgbe: not in enabled drivers build config
00:02:11.760 net/bnx2x: not in enabled drivers build config
00:02:11.760 net/bnxt: not in enabled drivers build config
00:02:11.760 net/bonding: not in enabled drivers build config
00:02:11.760 net/cnxk: not in enabled drivers build config
00:02:11.760 net/cpfl: not in enabled drivers build config
00:02:11.760 net/cxgbe: not in enabled drivers build config
00:02:11.760 net/dpaa: not in enabled drivers build config
00:02:11.760 net/dpaa2: not in enabled drivers build config
00:02:11.760 net/e1000: not in enabled drivers build config
00:02:11.760 net/ena: not in enabled drivers build config
00:02:11.760 net/enetc: not in enabled drivers build config
00:02:11.760 net/enetfec: not in enabled drivers build config
00:02:11.760 net/enic: not in enabled drivers build config
00:02:11.760 net/failsafe: not in enabled drivers build config
00:02:11.760 net/fm10k: not in enabled drivers build config
00:02:11.760 net/gve: not in enabled drivers build config
00:02:11.760 net/hinic: not in enabled drivers build config
00:02:11.760 net/hns3: not in enabled drivers build config
00:02:11.760 net/i40e: not in enabled drivers build config
00:02:11.760 net/iavf: not in enabled drivers build config
00:02:11.760 net/ice: not in enabled drivers build config
00:02:11.760 net/idpf: not in enabled drivers build config
00:02:11.760 net/igc: not in enabled drivers build config
00:02:11.760 net/ionic: not in enabled drivers build config
00:02:11.760 net/ipn3ke: not in enabled drivers build config
00:02:11.760 net/ixgbe: not in enabled drivers build config
00:02:11.760 net/mana: not in enabled drivers build config
00:02:11.760 net/memif: not in enabled drivers build config
00:02:11.760 net/mlx4: not in enabled drivers build config
00:02:11.760 net/mlx5: not in enabled drivers build config
00:02:11.760 net/mvneta: not in enabled drivers build config
00:02:11.760 net/mvpp2: not in enabled drivers build config
00:02:11.760 net/netvsc: not in enabled drivers build config
00:02:11.760 net/nfb: not in enabled drivers build config
00:02:11.760 net/nfp: not in enabled drivers build config
00:02:11.760 net/ngbe: not in enabled drivers build config
00:02:11.760 net/null: not in enabled drivers build config
00:02:11.760 net/octeontx: not in enabled drivers build config
00:02:11.760 net/octeon_ep: not in enabled drivers build config
00:02:11.760 net/pcap: not in enabled drivers build config
00:02:11.760 net/pfe: not in enabled drivers build config
00:02:11.760 net/qede: not in enabled drivers build config
00:02:11.760 net/ring: not in enabled drivers build config
00:02:11.760 net/sfc: not in enabled drivers build config
00:02:11.760 net/softnic: not in enabled drivers build config
00:02:11.760 net/tap: not in enabled drivers build config
00:02:11.760 net/thunderx: not in enabled drivers build config
00:02:11.760 net/txgbe: not in enabled drivers build config
00:02:11.760 net/vdev_netvsc: not in enabled drivers build config
00:02:11.760 net/vhost: not in enabled drivers build config
00:02:11.760 net/virtio: not in enabled drivers build config
00:02:11.760 net/vmxnet3: not in enabled drivers build config
00:02:11.760 raw/*: missing internal dependency, "rawdev"
00:02:11.760 crypto/armv8: not in enabled drivers build config
00:02:11.760 crypto/bcmfs: not in enabled drivers build config
00:02:11.760 crypto/caam_jr: not in enabled drivers build config
00:02:11.760 crypto/ccp: not in enabled drivers build config
00:02:11.760 crypto/cnxk: not in enabled drivers build config
00:02:11.760 crypto/dpaa_sec: not in enabled drivers build config
00:02:11.760 crypto/dpaa2_sec: not in enabled drivers build config
00:02:11.760 crypto/ipsec_mb: not in enabled drivers build config
00:02:11.760 crypto/mlx5: not in enabled drivers build config
00:02:11.760 crypto/mvsam: not in enabled drivers build config
00:02:11.760 crypto/nitrox: not in enabled drivers build config
00:02:11.760 crypto/null: not in enabled drivers build config
00:02:11.760 crypto/octeontx: not in enabled drivers build config
00:02:11.760 crypto/openssl: not in enabled drivers build config
00:02:11.760 crypto/scheduler: not in enabled drivers build config
00:02:11.760 crypto/uadk: not in enabled drivers build config
00:02:11.760 crypto/virtio: not in enabled drivers build config
00:02:11.760 compress/isal: not in enabled drivers build config
00:02:11.760 compress/mlx5: not in enabled drivers build config
00:02:11.760 compress/nitrox: not in enabled drivers build config
00:02:11.760 compress/octeontx: not in enabled drivers build config
00:02:11.760 compress/zlib: not in enabled drivers build config
00:02:11.760 regex/*: missing internal dependency, "regexdev"
00:02:11.760 ml/*: missing internal dependency, "mldev"
00:02:11.760 vdpa/ifc: not in enabled drivers build config
00:02:11.760 vdpa/mlx5: not in enabled drivers build config
00:02:11.760 vdpa/nfp: not in enabled drivers build config
00:02:11.760 vdpa/sfc: not in enabled drivers build config
00:02:11.760 event/*: missing internal dependency, "eventdev"
00:02:11.760 baseband/*: missing internal dependency, "bbdev"
00:02:11.760 gpu/*: missing internal dependency, "gpudev"
00:02:11.760
00:02:11.760
00:02:11.760 Build targets in project: 84
00:02:11.760
00:02:11.760 DPDK 24.03.0
00:02:11.760
00:02:11.760 User defined options
00:02:11.760 buildtype : debug
00:02:11.760 default_library : shared
00:02:11.760 libdir : lib
00:02:11.760 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:11.760 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:11.760 c_link_args :
00:02:11.760 cpu_instruction_set: native
00:02:11.760 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:11.761 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:11.761 enable_docs : false
00:02:11.761 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:11.761 enable_kmods : false
00:02:11.761 max_lcores : 128
00:02:11.761 tests : false
00:02:11.761
00:02:11.761 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:11.761 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:11.761 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:11.761 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:11.761 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:11.761 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:11.761 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:11.761 [6/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:11.761 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:11.761 [8/267] Linking static target lib/librte_kvargs.a
00:02:11.761 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:11.761 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:11.761 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:11.761 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:11.761 [13/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:11.761 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:11.761 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:11.761 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:11.761 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:11.761 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:12.021 [19/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:12.021 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:12.021 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:12.021 [22/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:12.021 [23/267] Linking static target lib/librte_log.a
00:02:12.021 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:12.021 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:12.021 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:12.021 [27/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:12.021 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:12.021 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:12.021 [30/267] Linking static target lib/librte_pci.a
00:02:12.021 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:12.021 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:12.021 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:12.021 [34/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:12.021 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:12.021 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:12.021 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:12.021 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:12.292 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.292 [40/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:12.292 [41/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:12.292 [42/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:12.292 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:12.292 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:12.292 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:12.292 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.292 [47/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:12.292 [48/267] Linking static target lib/librte_meter.a
00:02:12.292 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:12.292 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:12.292 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:12.292 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:12.293 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:12.293 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:12.293 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:12.293 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:12.293 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:12.293 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:12.293 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:12.293 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:12.293 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:12.293 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:12.293 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:12.293 [64/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:12.293 [65/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:12.293 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:12.293 [67/267] Linking static target lib/librte_ring.a
00:02:12.293 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:12.293 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:12.293 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:12.293 [71/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:12.293 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.293 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:12.293 [74/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.293 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:12.293 [76/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:12.293 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.293 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.293 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.293 [80/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.293 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.293 [82/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.293 [83/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.293 [84/267] Linking static target lib/librte_timer.a 00:02:12.293 [85/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.293 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.293 [87/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.293 [88/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:12.293 [89/267] Linking static target lib/librte_telemetry.a 00:02:12.293 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.293 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.293 [92/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:12.293 [93/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:12.293 [94/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.293 [95/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.293 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.293 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:12.293 [98/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.293 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.293 [100/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:12.293 [101/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.293 [102/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.293 [103/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.293 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.293 [105/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.293 [106/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.293 [107/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.293 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.293 [109/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.293 [110/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.293 [111/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.293 [112/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.293 [113/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:12.293 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.293 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.293 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.293 [117/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.553 [118/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.553 [119/267] Linking static target lib/librte_cmdline.a 00:02:12.553 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.553 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.553 [122/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:12.553 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:12.553 [124/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.553 [125/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.553 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:12.553 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.553 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.553 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.553 [130/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:12.553 [131/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.553 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:12.553 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.553 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.553 [135/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:12.553 [136/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.553 [137/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.553 [138/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.553 [139/267] Linking static target lib/librte_net.a 00:02:12.553 [140/267] Linking static target lib/librte_compressdev.a 00:02:12.553 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.553 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.553 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.553 [144/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.553 [145/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.553 [146/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.553 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.553 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:12.553 [149/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.553 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:12.553 [151/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.553 [152/267] Linking static target lib/librte_mempool.a 00:02:12.553 [153/267] Linking static target lib/librte_power.a 00:02:12.553 [154/267] Linking static target lib/librte_dmadev.a 00:02:12.554 [155/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.554 [156/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.554 [157/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:12.554 [158/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.554 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:12.554 [160/267] Linking static target lib/librte_security.a 00:02:12.554 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.554 [162/267] Linking target lib/librte_log.so.24.1 00:02:12.554 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:12.554 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.554 [165/267] Linking static target lib/librte_rcu.a 00:02:12.554 [166/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.554 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.554 [168/267] Linking static target drivers/librte_bus_vdev.a 00:02:12.554 [169/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.554 [170/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.554 [171/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:12.554 [172/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.554 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.554 [174/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.554 [175/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:12.554 [176/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.554 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:12.554 [178/267] Linking static target lib/librte_reorder.a 00:02:12.554 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.554 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.554 [181/267] Linking static target lib/librte_eal.a 00:02:12.554 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.554 [183/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.554 [184/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.554 [185/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.554 [186/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:12.554 [187/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:12.554 [188/267] Linking static target lib/librte_mbuf.a 00:02:12.815 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:12.815 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.815 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:12.815 [192/267] Linking target lib/librte_kvargs.so.24.1 00:02:12.815 [193/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.815 [194/267] Linking static target lib/librte_hash.a 00:02:12.815 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:12.815 [196/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.815 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.815 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.815 [199/267] Linking static target drivers/librte_bus_pci.a 00:02:12.815 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.815 [201/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.815 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:12.815 [203/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.815 [204/267] Linking static target lib/librte_cryptodev.a 00:02:12.815 [205/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.815 [206/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.815 [207/267] Linking static target drivers/librte_mempool_ring.a 00:02:12.815 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.815 [209/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.815 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.076 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.076 [212/267] Linking target lib/librte_telemetry.so.24.1 00:02:13.076 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.076 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:13.076 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.335 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.335 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.335 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:13.335 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.335 [220/267] Linking static target lib/librte_ethdev.a 00:02:13.595 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.595 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.595 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.595 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.595 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.595 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.167 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:14.167 [228/267] Linking static target lib/librte_vhost.a 00:02:15.136 [229/267] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:16.517 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.099 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.481 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.481 [233/267] Linking target lib/librte_eal.so.24.1 00:02:24.481 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.481 [235/267] Linking target lib/librte_ring.so.24.1 00:02:24.481 [236/267] Linking target lib/librte_pci.so.24.1 00:02:24.481 [237/267] Linking target lib/librte_meter.so.24.1 00:02:24.481 [238/267] Linking target lib/librte_timer.so.24.1 00:02:24.481 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:24.481 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.481 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.481 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.481 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.742 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.742 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.742 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.742 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:24.742 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:24.742 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.742 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.742 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.742 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:25.001 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.001 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:25.001 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:25.001 [256/267] Linking target lib/librte_net.so.24.1 00:02:25.001 [257/267] Linking target lib/librte_compressdev.so.24.1 00:02:25.261 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.261 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.261 [260/267] Linking target lib/librte_hash.so.24.1 00:02:25.261 [261/267] Linking target lib/librte_security.so.24.1 00:02:25.261 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:25.261 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:25.261 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.521 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.521 [266/267] Linking target lib/librte_power.so.24.1 00:02:25.521 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:25.521 INFO: autodetecting backend as ninja 00:02:25.521 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:26.460 CC lib/log/log.o 00:02:26.460 CC lib/log/log_deprecated.o 00:02:26.460 CC lib/log/log_flags.o 00:02:26.460 CC lib/ut_mock/mock.o 00:02:26.460 CC lib/ut/ut.o 00:02:26.719 LIB libspdk_log.a 00:02:26.719 LIB 
libspdk_ut.a 00:02:26.719 LIB libspdk_ut_mock.a 00:02:26.719 SO libspdk_ut.so.2.0 00:02:26.719 SO libspdk_log.so.7.0 00:02:26.719 SO libspdk_ut_mock.so.6.0 00:02:26.719 SYMLINK libspdk_ut.so 00:02:26.719 SYMLINK libspdk_log.so 00:02:26.719 SYMLINK libspdk_ut_mock.so 00:02:27.289 CC lib/ioat/ioat.o 00:02:27.289 CC lib/dma/dma.o 00:02:27.289 CC lib/util/base64.o 00:02:27.289 CC lib/util/bit_array.o 00:02:27.289 CC lib/util/cpuset.o 00:02:27.289 CC lib/util/crc16.o 00:02:27.289 CC lib/util/crc32.o 00:02:27.289 CXX lib/trace_parser/trace.o 00:02:27.289 CC lib/util/crc32c.o 00:02:27.289 CC lib/util/crc32_ieee.o 00:02:27.289 CC lib/util/crc64.o 00:02:27.289 CC lib/util/dif.o 00:02:27.289 CC lib/util/fd.o 00:02:27.289 CC lib/util/file.o 00:02:27.289 CC lib/util/hexlify.o 00:02:27.289 CC lib/util/iov.o 00:02:27.289 CC lib/util/math.o 00:02:27.289 CC lib/util/pipe.o 00:02:27.289 CC lib/util/strerror_tls.o 00:02:27.289 CC lib/util/string.o 00:02:27.289 CC lib/util/uuid.o 00:02:27.289 CC lib/util/fd_group.o 00:02:27.289 CC lib/util/zipf.o 00:02:27.289 CC lib/util/xor.o 00:02:27.289 CC lib/vfio_user/host/vfio_user_pci.o 00:02:27.289 CC lib/vfio_user/host/vfio_user.o 00:02:27.289 LIB libspdk_dma.a 00:02:27.289 SO libspdk_dma.so.4.0 00:02:27.549 LIB libspdk_ioat.a 00:02:27.549 SO libspdk_ioat.so.7.0 00:02:27.549 SYMLINK libspdk_dma.so 00:02:27.549 SYMLINK libspdk_ioat.so 00:02:27.549 LIB libspdk_vfio_user.a 00:02:27.549 SO libspdk_vfio_user.so.5.0 00:02:27.549 LIB libspdk_util.a 00:02:27.810 SYMLINK libspdk_vfio_user.so 00:02:27.810 SO libspdk_util.so.9.1 00:02:27.810 SYMLINK libspdk_util.so 00:02:28.071 LIB libspdk_trace_parser.a 00:02:28.071 SO libspdk_trace_parser.so.5.0 00:02:28.071 SYMLINK libspdk_trace_parser.so 00:02:28.332 CC lib/idxd/idxd.o 00:02:28.332 CC lib/idxd/idxd_user.o 00:02:28.332 CC lib/idxd/idxd_kernel.o 00:02:28.332 CC lib/conf/conf.o 00:02:28.332 CC lib/env_dpdk/env.o 00:02:28.332 CC lib/rdma_utils/rdma_utils.o 00:02:28.332 CC lib/env_dpdk/memory.o 00:02:28.332 CC lib/env_dpdk/pci.o 00:02:28.332 CC lib/rdma_provider/common.o 00:02:28.332 CC lib/env_dpdk/init.o 00:02:28.332 CC lib/json/json_parse.o 00:02:28.332 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:28.332 CC lib/env_dpdk/threads.o 00:02:28.332 CC lib/env_dpdk/pci_ioat.o 00:02:28.332 CC lib/json/json_util.o 00:02:28.332 CC lib/env_dpdk/pci_virtio.o 00:02:28.332 CC lib/env_dpdk/pci_idxd.o 00:02:28.332 CC lib/env_dpdk/pci_vmd.o 00:02:28.332 CC lib/json/json_write.o 00:02:28.332 CC lib/vmd/vmd.o 00:02:28.332 CC lib/vmd/led.o 00:02:28.332 CC lib/env_dpdk/pci_event.o 00:02:28.332 CC lib/env_dpdk/sigbus_handler.o 00:02:28.332 CC lib/env_dpdk/pci_dpdk.o 00:02:28.332 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:28.332 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:28.332 LIB libspdk_conf.a 00:02:28.332 LIB libspdk_rdma_provider.a 00:02:28.593 SO libspdk_conf.so.6.0 00:02:28.593 SO libspdk_rdma_provider.so.6.0 00:02:28.593 LIB libspdk_rdma_utils.a 00:02:28.593 SYMLINK libspdk_conf.so 00:02:28.593 SYMLINK libspdk_rdma_provider.so 00:02:28.593 LIB libspdk_json.a 00:02:28.593 SO libspdk_rdma_utils.so.1.0 00:02:28.593 SO libspdk_json.so.6.0 00:02:28.593 SYMLINK libspdk_rdma_utils.so 00:02:28.593 SYMLINK libspdk_json.so 00:02:28.854 LIB libspdk_idxd.a 00:02:28.854 SO libspdk_idxd.so.12.0 00:02:28.854 LIB libspdk_vmd.a 00:02:28.854 SYMLINK libspdk_idxd.so 00:02:28.854 SO libspdk_vmd.so.6.0 00:02:28.854 SYMLINK libspdk_vmd.so 00:02:29.116 CC lib/jsonrpc/jsonrpc_server.o 00:02:29.116 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:29.116 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:29.116 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:29.377 LIB libspdk_jsonrpc.a 00:02:29.377 SO libspdk_jsonrpc.so.6.0 00:02:29.377 SYMLINK libspdk_jsonrpc.so 00:02:29.377 LIB libspdk_env_dpdk.a 00:02:29.637 SO libspdk_env_dpdk.so.14.1 00:02:29.637 SYMLINK libspdk_env_dpdk.so 00:02:29.637 CC lib/rpc/rpc.o 00:02:29.897 LIB libspdk_rpc.a 00:02:29.897 SO libspdk_rpc.so.6.0 00:02:30.159 SYMLINK libspdk_rpc.so 00:02:30.420 CC lib/notify/notify.o 00:02:30.420 CC lib/notify/notify_rpc.o 00:02:30.420 CC lib/trace/trace.o 00:02:30.420 CC lib/trace/trace_flags.o 00:02:30.420 CC lib/keyring/keyring.o 00:02:30.420 CC lib/trace/trace_rpc.o 00:02:30.420 CC lib/keyring/keyring_rpc.o 00:02:30.680 LIB libspdk_notify.a 00:02:30.680 SO libspdk_notify.so.6.0 00:02:30.680 LIB libspdk_keyring.a 00:02:30.680 LIB libspdk_trace.a 00:02:30.680 SO libspdk_keyring.so.1.0 00:02:30.680 SYMLINK libspdk_notify.so 00:02:30.680 SO libspdk_trace.so.10.0 00:02:30.680 SYMLINK libspdk_keyring.so 00:02:30.680 SYMLINK libspdk_trace.so 00:02:31.251 CC lib/thread/thread.o 00:02:31.251 CC lib/thread/iobuf.o 00:02:31.251 CC lib/sock/sock.o 00:02:31.251 CC lib/sock/sock_rpc.o 00:02:31.512 LIB libspdk_sock.a 00:02:31.512 SO libspdk_sock.so.10.0 00:02:31.512 SYMLINK libspdk_sock.so 00:02:32.083 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.083 CC lib/nvme/nvme_ctrlr.o 00:02:32.083 CC lib/nvme/nvme_fabric.o 00:02:32.083 CC lib/nvme/nvme_ns_cmd.o 00:02:32.083 CC lib/nvme/nvme_ns.o 00:02:32.083 CC lib/nvme/nvme_pcie_common.o 00:02:32.083 CC lib/nvme/nvme_pcie.o 00:02:32.083 CC lib/nvme/nvme_qpair.o 00:02:32.083 CC lib/nvme/nvme.o 00:02:32.083 CC lib/nvme/nvme_discovery.o 00:02:32.083 CC lib/nvme/nvme_quirks.o 00:02:32.083 CC lib/nvme/nvme_transport.o 00:02:32.083 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:32.083 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:32.083 CC lib/nvme/nvme_tcp.o 00:02:32.083 CC lib/nvme/nvme_opal.o 00:02:32.083 CC lib/nvme/nvme_io_msg.o 00:02:32.083 CC lib/nvme/nvme_poll_group.o 00:02:32.083 CC lib/nvme/nvme_zns.o 00:02:32.083 CC lib/nvme/nvme_stubs.o 00:02:32.083 CC lib/nvme/nvme_auth.o 00:02:32.083 CC lib/nvme/nvme_cuse.o 00:02:32.083 CC lib/nvme/nvme_vfio_user.o 00:02:32.083 CC lib/nvme/nvme_rdma.o 00:02:32.343 LIB libspdk_thread.a 00:02:32.343 SO libspdk_thread.so.10.1 00:02:32.603 SYMLINK libspdk_thread.so 00:02:32.863 CC lib/vfu_tgt/tgt_endpoint.o 00:02:32.863 CC lib/vfu_tgt/tgt_rpc.o 00:02:32.863 CC lib/blob/blobstore.o 00:02:32.863 CC lib/blob/request.o 00:02:32.863 CC lib/blob/zeroes.o 00:02:32.863 CC lib/blob/blob_bs_dev.o 00:02:32.863 CC lib/init/json_config.o 00:02:32.863 CC lib/accel/accel.o 00:02:32.863 CC lib/init/subsystem.o 00:02:32.863 CC lib/accel/accel_rpc.o 00:02:32.863 CC lib/init/subsystem_rpc.o 00:02:32.863 CC lib/accel/accel_sw.o 00:02:32.863 CC lib/init/rpc.o 00:02:32.863 CC lib/virtio/virtio.o 00:02:32.863 CC lib/virtio/virtio_vhost_user.o 00:02:32.863 CC lib/virtio/virtio_vfio_user.o 00:02:32.863 CC lib/virtio/virtio_pci.o 00:02:33.123 LIB libspdk_init.a 00:02:33.123 LIB libspdk_vfu_tgt.a 00:02:33.123 SO libspdk_init.so.5.0 00:02:33.123 SO libspdk_vfu_tgt.so.3.0 00:02:33.123 LIB libspdk_virtio.a 00:02:33.123 SYMLINK libspdk_init.so 00:02:33.123 SO libspdk_virtio.so.7.0 00:02:33.123 SYMLINK libspdk_vfu_tgt.so 00:02:33.383 SYMLINK libspdk_virtio.so 00:02:33.643 CC lib/event/app.o 00:02:33.643 CC lib/event/reactor.o 00:02:33.643 CC lib/event/log_rpc.o 00:02:33.643 CC lib/event/app_rpc.o 00:02:33.643 CC lib/event/scheduler_static.o 00:02:33.643 LIB libspdk_accel.a 
00:02:33.643 SO libspdk_accel.so.15.1 00:02:33.643 LIB libspdk_nvme.a 00:02:33.904 SYMLINK libspdk_accel.so 00:02:33.904 SO libspdk_nvme.so.13.1 00:02:33.904 LIB libspdk_event.a 00:02:33.904 SO libspdk_event.so.14.0 00:02:34.164 SYMLINK libspdk_event.so 00:02:34.165 CC lib/bdev/bdev.o 00:02:34.165 CC lib/bdev/bdev_rpc.o 00:02:34.165 CC lib/bdev/bdev_zone.o 00:02:34.165 CC lib/bdev/part.o 00:02:34.165 CC lib/bdev/scsi_nvme.o 00:02:34.165 SYMLINK libspdk_nvme.so 00:02:35.545 LIB libspdk_blob.a 00:02:35.545 SO libspdk_blob.so.11.0 00:02:35.545 SYMLINK libspdk_blob.so 00:02:35.806 CC lib/blobfs/blobfs.o 00:02:35.806 CC lib/blobfs/tree.o 00:02:35.806 CC lib/lvol/lvol.o 00:02:36.377 LIB libspdk_bdev.a 00:02:36.377 SO libspdk_bdev.so.15.1 00:02:36.638 SYMLINK libspdk_bdev.so 00:02:36.638 LIB libspdk_blobfs.a 00:02:36.638 SO libspdk_blobfs.so.10.0 00:02:36.638 LIB libspdk_lvol.a 00:02:36.638 SYMLINK libspdk_blobfs.so 00:02:36.638 SO libspdk_lvol.so.10.0 00:02:36.898 SYMLINK libspdk_lvol.so 00:02:36.898 CC lib/nvmf/ctrlr.o 00:02:36.898 CC lib/nvmf/ctrlr_discovery.o 00:02:36.898 CC lib/nvmf/subsystem.o 00:02:36.898 CC lib/nvmf/ctrlr_bdev.o 00:02:36.898 CC lib/nvmf/nvmf.o 00:02:36.898 CC lib/nvmf/nvmf_rpc.o 00:02:36.898 CC lib/scsi/dev.o 00:02:36.898 CC lib/nvmf/transport.o 00:02:36.898 CC lib/scsi/lun.o 00:02:36.898 CC lib/nvmf/tcp.o 00:02:36.898 CC lib/scsi/port.o 00:02:36.898 CC lib/nvmf/stubs.o 00:02:36.898 CC lib/scsi/scsi.o 00:02:36.898 CC lib/nvmf/mdns_server.o 00:02:36.898 CC lib/scsi/scsi_bdev.o 00:02:36.898 CC lib/nvmf/vfio_user.o 00:02:36.898 CC lib/scsi/scsi_pr.o 00:02:36.898 CC lib/scsi/scsi_rpc.o 00:02:36.898 CC lib/nvmf/rdma.o 00:02:36.898 CC lib/nbd/nbd.o 00:02:36.898 CC lib/nvmf/auth.o 00:02:36.898 CC lib/nbd/nbd_rpc.o 00:02:36.898 CC lib/scsi/task.o 00:02:36.898 CC lib/ublk/ublk.o 00:02:36.898 CC lib/ftl/ftl_core.o 00:02:36.898 CC lib/ublk/ublk_rpc.o 00:02:36.898 CC lib/ftl/ftl_init.o 00:02:36.898 CC lib/ftl/ftl_layout.o 00:02:36.898 CC lib/ftl/ftl_debug.o 00:02:36.898 CC lib/ftl/ftl_io.o 00:02:36.898 CC lib/ftl/ftl_sb.o 00:02:36.898 CC lib/ftl/ftl_l2p.o 00:02:36.898 CC lib/ftl/ftl_l2p_flat.o 00:02:36.898 CC lib/ftl/ftl_nv_cache.o 00:02:36.898 CC lib/ftl/ftl_band.o 00:02:36.898 CC lib/ftl/ftl_band_ops.o 00:02:36.898 CC lib/ftl/ftl_writer.o 00:02:36.898 CC lib/ftl/ftl_rq.o 00:02:36.898 CC lib/ftl/ftl_reloc.o 00:02:36.898 CC lib/ftl/ftl_l2p_cache.o 00:02:36.898 CC lib/ftl/ftl_p2l.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.898 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.898 CC lib/ftl/utils/ftl_conf.o 00:02:36.898 CC lib/ftl/utils/ftl_md.o 00:02:36.898 CC lib/ftl/utils/ftl_mempool.o 00:02:36.898 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.898 CC lib/ftl/utils/ftl_property.o 00:02:36.898 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.898 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.898 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.898 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:36.898 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:36.898 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:36.898 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:36.898 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:36.898 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:36.898 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:36.898 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:36.898 CC lib/ftl/base/ftl_base_dev.o 00:02:36.898 CC lib/ftl/base/ftl_base_bdev.o 00:02:36.898 CC lib/ftl/ftl_trace.o 00:02:37.468 LIB libspdk_nbd.a 00:02:37.468 LIB libspdk_scsi.a 00:02:37.468 SO libspdk_nbd.so.7.0 00:02:37.468 SO libspdk_scsi.so.9.0 00:02:37.468 SYMLINK libspdk_nbd.so 00:02:37.468 SYMLINK libspdk_scsi.so 00:02:37.468 LIB libspdk_ublk.a 00:02:37.729 SO libspdk_ublk.so.3.0 00:02:37.729 SYMLINK libspdk_ublk.so 00:02:37.991 LIB libspdk_ftl.a 00:02:37.991 CC lib/vhost/vhost.o 00:02:37.991 CC lib/vhost/vhost_rpc.o 00:02:37.991 CC lib/vhost/vhost_scsi.o 00:02:37.991 CC lib/vhost/vhost_blk.o 00:02:37.991 CC lib/vhost/rte_vhost_user.o 00:02:37.991 CC lib/iscsi/conn.o 00:02:37.991 CC lib/iscsi/init_grp.o 00:02:37.991 CC lib/iscsi/iscsi.o 00:02:37.991 CC lib/iscsi/md5.o 00:02:37.991 CC lib/iscsi/param.o 00:02:37.991 CC lib/iscsi/portal_grp.o 00:02:37.991 CC lib/iscsi/tgt_node.o 00:02:37.991 CC lib/iscsi/iscsi_subsystem.o 00:02:37.991 CC lib/iscsi/iscsi_rpc.o 00:02:37.991 CC lib/iscsi/task.o 00:02:37.991 SO libspdk_ftl.so.9.0 00:02:38.596 SYMLINK libspdk_ftl.so 00:02:38.596 LIB libspdk_nvmf.a 00:02:38.857 SO libspdk_nvmf.so.18.1 00:02:38.857 LIB libspdk_vhost.a 00:02:38.857 SO libspdk_vhost.so.8.0 00:02:39.118 SYMLINK libspdk_nvmf.so 00:02:39.118 SYMLINK libspdk_vhost.so 00:02:39.118 LIB libspdk_iscsi.a 00:02:39.118 SO libspdk_iscsi.so.8.0 00:02:39.378 SYMLINK libspdk_iscsi.so 00:02:39.950 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.950 CC module/vfu_device/vfu_virtio.o 00:02:39.950 CC module/vfu_device/vfu_virtio_rpc.o 00:02:39.950 CC module/vfu_device/vfu_virtio_blk.o 00:02:39.950 CC module/vfu_device/vfu_virtio_scsi.o 00:02:39.950 LIB libspdk_env_dpdk_rpc.a 00:02:39.950 CC module/blob/bdev/blob_bdev.o 00:02:39.950 CC module/accel/iaa/accel_iaa.o 00:02:39.950 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.950 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.950 CC module/accel/ioat/accel_ioat.o 00:02:39.950 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.950 CC module/accel/error/accel_error.o 00:02:39.950 CC module/accel/error/accel_error_rpc.o 00:02:39.950 CC module/accel/dsa/accel_dsa.o 00:02:39.950 CC module/keyring/linux/keyring.o 00:02:39.950 CC module/keyring/linux/keyring_rpc.o 00:02:39.950 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.950 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.950 CC module/sock/posix/posix.o 00:02:40.211 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.211 CC module/keyring/file/keyring.o 00:02:40.211 SO libspdk_env_dpdk_rpc.so.6.0 00:02:40.211 CC module/keyring/file/keyring_rpc.o 00:02:40.211 SYMLINK libspdk_env_dpdk_rpc.so 00:02:40.211 LIB libspdk_keyring_linux.a 00:02:40.211 LIB libspdk_scheduler_gscheduler.a 00:02:40.211 LIB libspdk_accel_error.a 00:02:40.211 LIB libspdk_accel_iaa.a 00:02:40.211 LIB libspdk_keyring_file.a 00:02:40.211 SO libspdk_scheduler_gscheduler.so.4.0 00:02:40.211 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.211 LIB libspdk_accel_ioat.a 00:02:40.211 LIB libspdk_scheduler_dynamic.a 00:02:40.211 SO libspdk_keyring_linux.so.1.0 00:02:40.211 SO libspdk_accel_iaa.so.3.0 00:02:40.211 SO libspdk_accel_error.so.2.0 00:02:40.211 SO libspdk_keyring_file.so.1.0 00:02:40.211 SO libspdk_accel_ioat.so.6.0 00:02:40.211 LIB libspdk_blob_bdev.a 00:02:40.473 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:02:40.473 SO libspdk_scheduler_dynamic.so.4.0 00:02:40.473 SYMLINK libspdk_scheduler_gscheduler.so 00:02:40.473 LIB libspdk_accel_dsa.a 00:02:40.473 SO libspdk_blob_bdev.so.11.0 00:02:40.473 SYMLINK libspdk_keyring_linux.so 00:02:40.473 SYMLINK libspdk_accel_iaa.so 00:02:40.473 SYMLINK libspdk_accel_error.so 00:02:40.473 SO libspdk_accel_dsa.so.5.0 00:02:40.473 SYMLINK libspdk_keyring_file.so 00:02:40.473 SYMLINK libspdk_scheduler_dynamic.so 00:02:40.473 SYMLINK libspdk_accel_ioat.so 00:02:40.473 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:40.473 SYMLINK libspdk_blob_bdev.so 00:02:40.473 LIB libspdk_vfu_device.a 00:02:40.473 SYMLINK libspdk_accel_dsa.so 00:02:40.473 SO libspdk_vfu_device.so.3.0 00:02:40.733 SYMLINK libspdk_vfu_device.so 00:02:40.733 LIB libspdk_sock_posix.a 00:02:40.733 SO libspdk_sock_posix.so.6.0 00:02:40.993 SYMLINK libspdk_sock_posix.so 00:02:40.993 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.993 CC module/bdev/nvme/bdev_nvme.o 00:02:40.993 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.993 CC module/bdev/nvme/nvme_rpc.o 00:02:40.993 CC module/bdev/delay/vbdev_delay.o 00:02:40.993 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:40.993 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.993 CC module/bdev/nvme/vbdev_opal.o 00:02:40.993 CC module/bdev/error/vbdev_error.o 00:02:40.993 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.993 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.993 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.993 CC module/bdev/gpt/gpt.o 00:02:40.993 CC module/bdev/gpt/vbdev_gpt.o 00:02:40.993 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.993 CC module/bdev/split/vbdev_split.o 00:02:40.993 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.993 CC module/bdev/null/bdev_null.o 00:02:40.993 CC module/bdev/malloc/bdev_malloc.o 00:02:40.993 CC module/bdev/null/bdev_null_rpc.o 00:02:40.993 CC module/bdev/ftl/bdev_ftl.o 00:02:40.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.993 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.993 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.993 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.993 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.993 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.993 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.993 CC module/bdev/aio/bdev_aio.o 00:02:40.993 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.993 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.993 CC module/bdev/raid/bdev_raid.o 00:02:40.993 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.993 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.993 CC module/bdev/raid/raid0.o 00:02:40.993 CC module/bdev/raid/raid1.o 00:02:40.993 CC module/bdev/raid/concat.o 00:02:40.993 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.993 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:41.254 LIB libspdk_blobfs_bdev.a 00:02:41.254 SO libspdk_blobfs_bdev.so.6.0 00:02:41.254 LIB libspdk_bdev_gpt.a 00:02:41.254 LIB libspdk_bdev_split.a 00:02:41.254 LIB libspdk_bdev_error.a 00:02:41.254 LIB libspdk_bdev_null.a 00:02:41.254 SO libspdk_bdev_gpt.so.6.0 00:02:41.254 SYMLINK libspdk_blobfs_bdev.so 00:02:41.254 SO libspdk_bdev_split.so.6.0 00:02:41.515 LIB libspdk_bdev_ftl.a 00:02:41.515 SO libspdk_bdev_error.so.6.0 00:02:41.515 SO libspdk_bdev_null.so.6.0 00:02:41.515 SYMLINK libspdk_bdev_gpt.so 00:02:41.515 LIB libspdk_bdev_passthru.a 00:02:41.515 LIB 
libspdk_bdev_malloc.a 00:02:41.515 LIB libspdk_bdev_zone_block.a 00:02:41.515 LIB libspdk_bdev_delay.a 00:02:41.515 SO libspdk_bdev_ftl.so.6.0 00:02:41.515 SYMLINK libspdk_bdev_split.so 00:02:41.515 LIB libspdk_bdev_aio.a 00:02:41.515 LIB libspdk_bdev_iscsi.a 00:02:41.515 SO libspdk_bdev_malloc.so.6.0 00:02:41.515 SYMLINK libspdk_bdev_error.so 00:02:41.515 SO libspdk_bdev_passthru.so.6.0 00:02:41.515 SO libspdk_bdev_zone_block.so.6.0 00:02:41.515 SYMLINK libspdk_bdev_null.so 00:02:41.515 SO libspdk_bdev_delay.so.6.0 00:02:41.515 SO libspdk_bdev_aio.so.6.0 00:02:41.515 SO libspdk_bdev_iscsi.so.6.0 00:02:41.515 SYMLINK libspdk_bdev_ftl.so 00:02:41.515 SYMLINK libspdk_bdev_malloc.so 00:02:41.515 SYMLINK libspdk_bdev_passthru.so 00:02:41.515 SYMLINK libspdk_bdev_zone_block.so 00:02:41.515 SYMLINK libspdk_bdev_delay.so 00:02:41.515 SYMLINK libspdk_bdev_aio.so 00:02:41.515 SYMLINK libspdk_bdev_iscsi.so 00:02:41.515 LIB libspdk_bdev_lvol.a 00:02:41.515 LIB libspdk_bdev_virtio.a 00:02:41.515 SO libspdk_bdev_lvol.so.6.0 00:02:41.515 SO libspdk_bdev_virtio.so.6.0 00:02:41.776 SYMLINK libspdk_bdev_lvol.so 00:02:41.776 SYMLINK libspdk_bdev_virtio.so 00:02:42.036 LIB libspdk_bdev_raid.a 00:02:42.036 SO libspdk_bdev_raid.so.6.0 00:02:42.036 SYMLINK libspdk_bdev_raid.so 00:02:42.978 LIB libspdk_bdev_nvme.a 00:02:42.978 SO libspdk_bdev_nvme.so.7.0 00:02:43.238 SYMLINK libspdk_bdev_nvme.so 00:02:43.809 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.809 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.809 CC module/event/subsystems/vmd/vmd.o 00:02:43.809 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.809 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.809 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.809 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:43.809 CC module/event/subsystems/keyring/keyring.o 00:02:43.809 CC module/event/subsystems/sock/sock.o 00:02:44.070 LIB libspdk_event_iobuf.a 00:02:44.070 LIB libspdk_event_vhost_blk.a 00:02:44.070 LIB libspdk_event_vmd.a 00:02:44.070 LIB libspdk_event_keyring.a 00:02:44.070 LIB libspdk_event_vfu_tgt.a 00:02:44.070 LIB libspdk_event_scheduler.a 00:02:44.070 LIB libspdk_event_sock.a 00:02:44.070 SO libspdk_event_iobuf.so.3.0 00:02:44.070 SO libspdk_event_vhost_blk.so.3.0 00:02:44.070 SO libspdk_event_scheduler.so.4.0 00:02:44.070 SO libspdk_event_vmd.so.6.0 00:02:44.070 SO libspdk_event_vfu_tgt.so.3.0 00:02:44.070 SO libspdk_event_keyring.so.1.0 00:02:44.070 SO libspdk_event_sock.so.5.0 00:02:44.070 SYMLINK libspdk_event_iobuf.so 00:02:44.070 SYMLINK libspdk_event_vhost_blk.so 00:02:44.070 SYMLINK libspdk_event_scheduler.so 00:02:44.070 SYMLINK libspdk_event_vfu_tgt.so 00:02:44.070 SYMLINK libspdk_event_keyring.so 00:02:44.070 SYMLINK libspdk_event_vmd.so 00:02:44.070 SYMLINK libspdk_event_sock.so 00:02:44.331 CC module/event/subsystems/accel/accel.o 00:02:44.591 LIB libspdk_event_accel.a 00:02:44.592 SO libspdk_event_accel.so.6.0 00:02:44.592 SYMLINK libspdk_event_accel.so 00:02:45.161 CC module/event/subsystems/bdev/bdev.o 00:02:45.161 LIB libspdk_event_bdev.a 00:02:45.161 SO libspdk_event_bdev.so.6.0 00:02:45.422 SYMLINK libspdk_event_bdev.so 00:02:45.682 CC module/event/subsystems/ublk/ublk.o 00:02:45.682 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.682 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.682 CC module/event/subsystems/scsi/scsi.o 00:02:45.682 CC module/event/subsystems/nbd/nbd.o 00:02:45.961 LIB libspdk_event_ublk.a 00:02:45.961 LIB libspdk_event_nbd.a 00:02:45.961 LIB libspdk_event_scsi.a 
00:02:45.961 SO libspdk_event_ublk.so.3.0 00:02:45.961 SO libspdk_event_nbd.so.6.0 00:02:45.961 LIB libspdk_event_nvmf.a 00:02:45.961 SO libspdk_event_scsi.so.6.0 00:02:45.961 SO libspdk_event_nvmf.so.6.0 00:02:45.961 SYMLINK libspdk_event_ublk.so 00:02:45.961 SYMLINK libspdk_event_nbd.so 00:02:45.961 SYMLINK libspdk_event_scsi.so 00:02:45.961 SYMLINK libspdk_event_nvmf.so 00:02:46.222 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:46.222 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.483 LIB libspdk_event_vhost_scsi.a 00:02:46.483 LIB libspdk_event_iscsi.a 00:02:46.483 SO libspdk_event_vhost_scsi.so.3.0 00:02:46.483 SO libspdk_event_iscsi.so.6.0 00:02:46.483 SYMLINK libspdk_event_vhost_scsi.so 00:02:46.743 SYMLINK libspdk_event_iscsi.so 00:02:46.743 SO libspdk.so.6.0 00:02:46.743 SYMLINK libspdk.so 00:02:47.314 CXX app/trace/trace.o 00:02:47.314 CC app/trace_record/trace_record.o 00:02:47.314 CC app/spdk_top/spdk_top.o 00:02:47.314 CC app/spdk_nvme_discover/discovery_aer.o 00:02:47.314 CC app/spdk_nvme_identify/identify.o 00:02:47.314 CC app/spdk_lspci/spdk_lspci.o 00:02:47.314 TEST_HEADER include/spdk/accel_module.h 00:02:47.314 TEST_HEADER include/spdk/accel.h 00:02:47.314 CC test/rpc_client/rpc_client_test.o 00:02:47.314 TEST_HEADER include/spdk/assert.h 00:02:47.314 TEST_HEADER include/spdk/barrier.h 00:02:47.314 TEST_HEADER include/spdk/base64.h 00:02:47.314 TEST_HEADER include/spdk/bdev.h 00:02:47.314 CC app/spdk_nvme_perf/perf.o 00:02:47.314 TEST_HEADER include/spdk/bdev_zone.h 00:02:47.314 TEST_HEADER include/spdk/bdev_module.h 00:02:47.314 TEST_HEADER include/spdk/bit_pool.h 00:02:47.314 TEST_HEADER include/spdk/bit_array.h 00:02:47.314 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:47.314 TEST_HEADER include/spdk/blob_bdev.h 00:02:47.314 TEST_HEADER include/spdk/blobfs.h 00:02:47.314 TEST_HEADER include/spdk/blob.h 00:02:47.314 TEST_HEADER include/spdk/config.h 00:02:47.314 TEST_HEADER include/spdk/conf.h 00:02:47.314 TEST_HEADER include/spdk/cpuset.h 00:02:47.314 TEST_HEADER include/spdk/crc16.h 00:02:47.314 TEST_HEADER include/spdk/crc32.h 00:02:47.314 TEST_HEADER include/spdk/crc64.h 00:02:47.314 TEST_HEADER include/spdk/dma.h 00:02:47.314 TEST_HEADER include/spdk/dif.h 00:02:47.314 TEST_HEADER include/spdk/endian.h 00:02:47.314 TEST_HEADER include/spdk/env_dpdk.h 00:02:47.314 TEST_HEADER include/spdk/env.h 00:02:47.314 TEST_HEADER include/spdk/event.h 00:02:47.314 TEST_HEADER include/spdk/fd_group.h 00:02:47.314 TEST_HEADER include/spdk/fd.h 00:02:47.314 TEST_HEADER include/spdk/file.h 00:02:47.314 TEST_HEADER include/spdk/ftl.h 00:02:47.314 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:47.314 TEST_HEADER include/spdk/gpt_spec.h 00:02:47.314 CC app/nvmf_tgt/nvmf_main.o 00:02:47.314 TEST_HEADER include/spdk/hexlify.h 00:02:47.314 TEST_HEADER include/spdk/histogram_data.h 00:02:47.314 TEST_HEADER include/spdk/idxd.h 00:02:47.314 TEST_HEADER include/spdk/idxd_spec.h 00:02:47.314 TEST_HEADER include/spdk/ioat.h 00:02:47.314 TEST_HEADER include/spdk/init.h 00:02:47.314 TEST_HEADER include/spdk/ioat_spec.h 00:02:47.314 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.314 TEST_HEADER include/spdk/iscsi_spec.h 00:02:47.314 CC app/spdk_dd/spdk_dd.o 00:02:47.314 TEST_HEADER include/spdk/json.h 00:02:47.314 TEST_HEADER include/spdk/jsonrpc.h 00:02:47.314 TEST_HEADER include/spdk/keyring.h 00:02:47.314 TEST_HEADER include/spdk/keyring_module.h 00:02:47.314 TEST_HEADER include/spdk/likely.h 00:02:47.314 TEST_HEADER include/spdk/lvol.h 00:02:47.314 TEST_HEADER include/spdk/log.h 
00:02:47.314 TEST_HEADER include/spdk/memory.h 00:02:47.314 TEST_HEADER include/spdk/notify.h 00:02:47.314 TEST_HEADER include/spdk/mmio.h 00:02:47.314 TEST_HEADER include/spdk/nvme.h 00:02:47.314 TEST_HEADER include/spdk/nbd.h 00:02:47.314 TEST_HEADER include/spdk/nvme_intel.h 00:02:47.314 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:47.314 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.314 TEST_HEADER include/spdk/nvme_spec.h 00:02:47.314 CC app/spdk_tgt/spdk_tgt.o 00:02:47.314 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.314 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.314 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.314 TEST_HEADER include/spdk/nvmf.h 00:02:47.314 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.314 TEST_HEADER include/spdk/opal.h 00:02:47.314 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.314 TEST_HEADER include/spdk/opal_spec.h 00:02:47.314 TEST_HEADER include/spdk/pipe.h 00:02:47.314 TEST_HEADER include/spdk/pci_ids.h 00:02:47.314 TEST_HEADER include/spdk/queue.h 00:02:47.314 TEST_HEADER include/spdk/reduce.h 00:02:47.314 TEST_HEADER include/spdk/rpc.h 00:02:47.314 TEST_HEADER include/spdk/scheduler.h 00:02:47.314 TEST_HEADER include/spdk/scsi.h 00:02:47.314 TEST_HEADER include/spdk/sock.h 00:02:47.314 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.314 TEST_HEADER include/spdk/string.h 00:02:47.314 TEST_HEADER include/spdk/stdinc.h 00:02:47.314 TEST_HEADER include/spdk/thread.h 00:02:47.314 TEST_HEADER include/spdk/trace.h 00:02:47.314 TEST_HEADER include/spdk/trace_parser.h 00:02:47.314 TEST_HEADER include/spdk/ublk.h 00:02:47.314 TEST_HEADER include/spdk/tree.h 00:02:47.314 TEST_HEADER include/spdk/util.h 00:02:47.314 TEST_HEADER include/spdk/uuid.h 00:02:47.314 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.314 TEST_HEADER include/spdk/version.h 00:02:47.314 TEST_HEADER include/spdk/vhost.h 00:02:47.314 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.314 TEST_HEADER include/spdk/vmd.h 00:02:47.314 TEST_HEADER include/spdk/xor.h 00:02:47.314 CXX test/cpp_headers/accel.o 00:02:47.314 TEST_HEADER include/spdk/zipf.h 00:02:47.314 CXX test/cpp_headers/accel_module.o 00:02:47.314 CXX test/cpp_headers/assert.o 00:02:47.314 CXX test/cpp_headers/barrier.o 00:02:47.314 CXX test/cpp_headers/base64.o 00:02:47.314 CXX test/cpp_headers/bdev_module.o 00:02:47.314 CXX test/cpp_headers/bdev.o 00:02:47.314 CXX test/cpp_headers/bdev_zone.o 00:02:47.314 CXX test/cpp_headers/bit_array.o 00:02:47.314 CXX test/cpp_headers/blob_bdev.o 00:02:47.314 CXX test/cpp_headers/bit_pool.o 00:02:47.314 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.314 CXX test/cpp_headers/blob.o 00:02:47.314 CXX test/cpp_headers/blobfs.o 00:02:47.314 CXX test/cpp_headers/config.o 00:02:47.314 CXX test/cpp_headers/conf.o 00:02:47.314 CXX test/cpp_headers/cpuset.o 00:02:47.314 CXX test/cpp_headers/crc16.o 00:02:47.314 CXX test/cpp_headers/crc64.o 00:02:47.314 CXX test/cpp_headers/crc32.o 00:02:47.314 CXX test/cpp_headers/dif.o 00:02:47.314 CXX test/cpp_headers/env_dpdk.o 00:02:47.314 CXX test/cpp_headers/dma.o 00:02:47.314 CXX test/cpp_headers/endian.o 00:02:47.314 CXX test/cpp_headers/env.o 00:02:47.314 CXX test/cpp_headers/fd.o 00:02:47.314 CXX test/cpp_headers/event.o 00:02:47.314 CXX test/cpp_headers/fd_group.o 00:02:47.314 CXX test/cpp_headers/file.o 00:02:47.314 CXX test/cpp_headers/gpt_spec.o 00:02:47.314 CXX test/cpp_headers/hexlify.o 00:02:47.314 CXX test/cpp_headers/ftl.o 00:02:47.314 CXX test/cpp_headers/histogram_data.o 00:02:47.314 CXX test/cpp_headers/idxd.o 00:02:47.314 CXX 
test/cpp_headers/idxd_spec.o 00:02:47.314 CXX test/cpp_headers/ioat.o 00:02:47.314 CXX test/cpp_headers/init.o 00:02:47.314 CXX test/cpp_headers/ioat_spec.o 00:02:47.314 CXX test/cpp_headers/iscsi_spec.o 00:02:47.314 CXX test/cpp_headers/json.o 00:02:47.314 CXX test/cpp_headers/jsonrpc.o 00:02:47.314 CXX test/cpp_headers/likely.o 00:02:47.314 CXX test/cpp_headers/keyring_module.o 00:02:47.314 CXX test/cpp_headers/keyring.o 00:02:47.314 CXX test/cpp_headers/lvol.o 00:02:47.314 CXX test/cpp_headers/log.o 00:02:47.314 CXX test/cpp_headers/memory.o 00:02:47.314 CXX test/cpp_headers/mmio.o 00:02:47.314 CXX test/cpp_headers/notify.o 00:02:47.314 CXX test/cpp_headers/nbd.o 00:02:47.314 CXX test/cpp_headers/nvme.o 00:02:47.314 CXX test/cpp_headers/nvme_intel.o 00:02:47.314 CXX test/cpp_headers/nvme_ocssd.o 00:02:47.314 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.314 CXX test/cpp_headers/nvme_spec.o 00:02:47.314 CXX test/cpp_headers/nvme_zns.o 00:02:47.314 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.314 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.314 CC examples/ioat/verify/verify.o 00:02:47.314 CXX test/cpp_headers/nvmf_spec.o 00:02:47.315 CXX test/cpp_headers/opal.o 00:02:47.315 CC examples/util/zipf/zipf.o 00:02:47.315 CC test/env/pci/pci_ut.o 00:02:47.315 CXX test/cpp_headers/nvmf_transport.o 00:02:47.315 CXX test/cpp_headers/nvmf.o 00:02:47.315 CC examples/ioat/perf/perf.o 00:02:47.315 CC test/env/vtophys/vtophys.o 00:02:47.315 CXX test/cpp_headers/opal_spec.o 00:02:47.315 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.315 CXX test/cpp_headers/pci_ids.o 00:02:47.576 CXX test/cpp_headers/pipe.o 00:02:47.576 CXX test/cpp_headers/reduce.o 00:02:47.576 CXX test/cpp_headers/queue.o 00:02:47.576 CXX test/cpp_headers/rpc.o 00:02:47.576 CXX test/cpp_headers/scheduler.o 00:02:47.576 CXX test/cpp_headers/scsi_spec.o 00:02:47.576 CXX test/cpp_headers/sock.o 00:02:47.576 CXX test/cpp_headers/scsi.o 00:02:47.576 LINK spdk_lspci 00:02:47.576 CXX test/cpp_headers/thread.o 00:02:47.576 CXX test/cpp_headers/string.o 00:02:47.576 CXX test/cpp_headers/stdinc.o 00:02:47.576 CXX test/cpp_headers/trace.o 00:02:47.576 CC test/app/jsoncat/jsoncat.o 00:02:47.576 CXX test/cpp_headers/trace_parser.o 00:02:47.576 CC test/app/histogram_perf/histogram_perf.o 00:02:47.576 CXX test/cpp_headers/ublk.o 00:02:47.576 CXX test/cpp_headers/tree.o 00:02:47.576 CXX test/cpp_headers/util.o 00:02:47.576 CC test/thread/poller_perf/poller_perf.o 00:02:47.576 CXX test/cpp_headers/uuid.o 00:02:47.576 CXX test/cpp_headers/vfio_user_spec.o 00:02:47.576 CXX test/cpp_headers/version.o 00:02:47.576 CC test/app/stub/stub.o 00:02:47.576 CXX test/cpp_headers/vfio_user_pci.o 00:02:47.576 CXX test/cpp_headers/xor.o 00:02:47.576 CC test/dma/test_dma/test_dma.o 00:02:47.576 CXX test/cpp_headers/vhost.o 00:02:47.576 CXX test/cpp_headers/vmd.o 00:02:47.576 CXX test/cpp_headers/zipf.o 00:02:47.576 CC test/env/memory/memory_ut.o 00:02:47.576 CC app/fio/nvme/fio_plugin.o 00:02:47.576 LINK spdk_nvme_discover 00:02:47.576 LINK rpc_client_test 00:02:47.576 CC test/app/bdev_svc/bdev_svc.o 00:02:47.576 CC app/fio/bdev/fio_plugin.o 00:02:47.576 LINK nvmf_tgt 00:02:47.576 LINK interrupt_tgt 00:02:47.835 LINK spdk_trace_record 00:02:47.835 LINK iscsi_tgt 00:02:47.835 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.835 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.835 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.835 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.835 LINK spdk_tgt 00:02:47.835 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.835 LINK verify 00:02:47.835 LINK spdk_dd 00:02:47.835 LINK spdk_trace 00:02:48.093 LINK jsoncat 00:02:48.093 LINK poller_perf 00:02:48.093 LINK histogram_perf 00:02:48.093 LINK vtophys 00:02:48.093 LINK zipf 00:02:48.093 LINK env_dpdk_post_init 00:02:48.093 LINK stub 00:02:48.093 LINK ioat_perf 00:02:48.093 LINK bdev_svc 00:02:48.353 LINK test_dma 00:02:48.353 CC app/vhost/vhost.o 00:02:48.353 LINK vhost_fuzz 00:02:48.353 LINK nvme_fuzz 00:02:48.353 LINK pci_ut 00:02:48.353 LINK spdk_nvme_identify 00:02:48.353 LINK spdk_bdev 00:02:48.613 LINK spdk_nvme 00:02:48.613 LINK spdk_nvme_perf 00:02:48.613 CC test/event/reactor_perf/reactor_perf.o 00:02:48.613 CC test/event/event_perf/event_perf.o 00:02:48.613 LINK mem_callbacks 00:02:48.613 CC examples/idxd/perf/perf.o 00:02:48.613 LINK vhost 00:02:48.613 CC test/event/reactor/reactor.o 00:02:48.613 LINK spdk_top 00:02:48.613 CC examples/vmd/led/led.o 00:02:48.613 CC examples/vmd/lsvmd/lsvmd.o 00:02:48.613 CC test/event/app_repeat/app_repeat.o 00:02:48.613 CC examples/sock/hello_world/hello_sock.o 00:02:48.613 CC test/event/scheduler/scheduler.o 00:02:48.613 CC examples/thread/thread/thread_ex.o 00:02:48.613 LINK reactor_perf 00:02:48.613 LINK event_perf 00:02:48.613 LINK lsvmd 00:02:48.613 LINK led 00:02:48.613 LINK reactor 00:02:48.613 LINK app_repeat 00:02:48.874 LINK memory_ut 00:02:48.874 LINK hello_sock 00:02:48.874 CC test/nvme/sgl/sgl.o 00:02:48.874 CC test/nvme/overhead/overhead.o 00:02:48.874 CC test/nvme/cuse/cuse.o 00:02:48.874 CC test/nvme/reset/reset.o 00:02:48.874 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.874 CC test/nvme/aer/aer.o 00:02:48.874 CC test/nvme/startup/startup.o 00:02:48.874 CC test/nvme/compliance/nvme_compliance.o 00:02:48.874 CC test/nvme/boot_partition/boot_partition.o 00:02:48.874 CC test/nvme/e2edp/nvme_dp.o 00:02:48.874 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.874 CC test/nvme/err_injection/err_injection.o 00:02:48.874 CC test/nvme/reserve/reserve.o 00:02:48.874 CC test/nvme/simple_copy/simple_copy.o 00:02:48.874 CC test/nvme/connect_stress/connect_stress.o 00:02:48.874 LINK scheduler 00:02:48.874 CC test/nvme/fdp/fdp.o 00:02:48.874 LINK idxd_perf 00:02:48.874 CC test/blobfs/mkfs/mkfs.o 00:02:48.874 CC test/accel/dif/dif.o 00:02:48.874 LINK thread 00:02:48.874 CC test/lvol/esnap/esnap.o 00:02:49.134 LINK boot_partition 00:02:49.134 LINK startup 00:02:49.134 LINK connect_stress 00:02:49.134 LINK fused_ordering 00:02:49.134 LINK doorbell_aers 00:02:49.134 LINK err_injection 00:02:49.134 LINK reserve 00:02:49.134 LINK reset 00:02:49.134 LINK overhead 00:02:49.134 LINK sgl 00:02:49.134 LINK simple_copy 00:02:49.134 LINK mkfs 00:02:49.134 LINK aer 00:02:49.134 LINK nvme_dp 00:02:49.134 LINK fdp 00:02:49.134 LINK nvme_compliance 00:02:49.134 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:49.134 CC examples/nvme/hello_world/hello_world.o 00:02:49.394 CC examples/nvme/abort/abort.o 00:02:49.394 CC examples/nvme/hotplug/hotplug.o 00:02:49.394 CC examples/nvme/arbitration/arbitration.o 00:02:49.394 CC examples/nvme/reconnect/reconnect.o 00:02:49.394 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:49.394 LINK dif 00:02:49.394 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:49.394 LINK iscsi_fuzz 00:02:49.394 CC examples/accel/perf/accel_perf.o 00:02:49.394 LINK hello_world 00:02:49.394 LINK pmr_persistence 00:02:49.394 LINK cmb_copy 00:02:49.394 CC examples/blob/hello_world/hello_blob.o 00:02:49.394 CC examples/blob/cli/blobcli.o 00:02:49.654 LINK 
hotplug 00:02:49.654 LINK reconnect 00:02:49.654 LINK arbitration 00:02:49.654 LINK abort 00:02:49.654 LINK nvme_manage 00:02:49.914 LINK hello_blob 00:02:49.914 CC test/bdev/bdevio/bdevio.o 00:02:49.914 LINK accel_perf 00:02:49.914 LINK cuse 00:02:49.914 LINK blobcli 00:02:50.175 LINK bdevio 00:02:50.436 CC examples/bdev/hello_world/hello_bdev.o 00:02:50.436 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.696 LINK hello_bdev 00:02:51.267 LINK bdevperf 00:02:51.839 CC examples/nvmf/nvmf/nvmf.o 00:02:52.100 LINK nvmf 00:02:53.042 LINK esnap 00:02:53.612 00:02:53.612 real 0m51.129s 00:02:53.612 user 6m34.664s 00:02:53.612 sys 4m14.202s 00:02:53.612 13:48:51 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:53.612 13:48:51 make -- common/autotest_common.sh@10 -- $ set +x 00:02:53.612 ************************************ 00:02:53.612 END TEST make 00:02:53.612 ************************************ 00:02:53.612 13:48:51 -- common/autotest_common.sh@1142 -- $ return 0 00:02:53.612 13:48:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:53.612 13:48:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:53.612 13:48:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:53.612 13:48:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.612 13:48:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:53.612 13:48:51 -- pm/common@44 -- $ pid=1014052 00:02:53.612 13:48:51 -- pm/common@50 -- $ kill -TERM 1014052 00:02:53.612 13:48:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.612 13:48:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:53.612 13:48:51 -- pm/common@44 -- $ pid=1014053 00:02:53.612 13:48:51 -- pm/common@50 -- $ kill -TERM 1014053 00:02:53.612 13:48:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.612 13:48:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:53.612 13:48:51 -- pm/common@44 -- $ pid=1014055 00:02:53.612 13:48:51 -- pm/common@50 -- $ kill -TERM 1014055 00:02:53.612 13:48:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.612 13:48:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:53.612 13:48:51 -- pm/common@44 -- $ pid=1014079 00:02:53.612 13:48:51 -- pm/common@50 -- $ sudo -E kill -TERM 1014079 00:02:53.612 13:48:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:53.612 13:48:51 -- nvmf/common.sh@7 -- # uname -s 00:02:53.612 13:48:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:53.612 13:48:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:53.612 13:48:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:53.612 13:48:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:53.612 13:48:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:53.612 13:48:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:53.612 13:48:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:53.612 13:48:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:53.612 13:48:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:53.612 13:48:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:53.612 13:48:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:53.612 13:48:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:53.612 13:48:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:53.612 13:48:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:53.612 13:48:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:53.612 13:48:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:53.612 13:48:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:53.874 13:48:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:53.874 13:48:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.874 13:48:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.874 13:48:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.874 13:48:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.874 13:48:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.874 13:48:51 -- paths/export.sh@5 -- # export PATH 00:02:53.874 13:48:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.874 13:48:51 -- nvmf/common.sh@47 -- # : 0 00:02:53.874 13:48:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:53.874 13:48:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:53.874 13:48:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:53.874 13:48:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:53.874 13:48:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:53.874 13:48:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:53.874 13:48:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:53.874 13:48:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:53.874 13:48:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:53.874 13:48:51 -- spdk/autotest.sh@32 -- # uname -s 00:02:53.874 13:48:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:53.874 13:48:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:53.874 13:48:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:53.874 13:48:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:53.874 13:48:51 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:53.874 13:48:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:53.874 13:48:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:53.874 13:48:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:53.874 13:48:51 -- spdk/autotest.sh@48 -- # udevadm_pid=1077737 00:02:53.874 13:48:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:53.874 13:48:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:53.874 13:48:51 -- pm/common@17 -- # local monitor 00:02:53.874 13:48:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.874 13:48:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.874 13:48:51 -- pm/common@21 -- # date +%s 00:02:53.874 13:48:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.874 13:48:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.874 13:48:51 -- pm/common@21 -- # date +%s 00:02:53.874 13:48:51 -- pm/common@25 -- # sleep 1 00:02:53.874 13:48:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044131 00:02:53.874 13:48:51 -- pm/common@21 -- # date +%s 00:02:53.874 13:48:51 -- pm/common@21 -- # date +%s 00:02:53.874 13:48:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044131 00:02:53.874 13:48:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044131 00:02:53.874 13:48:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044131 00:02:53.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044131_collect-vmstat.pm.log 00:02:53.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044131_collect-cpu-load.pm.log 00:02:53.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044131_collect-cpu-temp.pm.log 00:02:53.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044131_collect-bmc-pm.bmc.pm.log 00:02:54.816 13:48:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:54.816 13:48:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:54.816 13:48:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:54.816 13:48:52 -- common/autotest_common.sh@10 -- # set +x 00:02:54.816 13:48:52 -- spdk/autotest.sh@59 -- # create_test_list 00:02:54.816 13:48:52 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:54.816 13:48:52 -- common/autotest_common.sh@10 -- # set +x 00:02:54.816 13:48:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:54.816 13:48:52 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.816 13:48:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
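Before the autotest body runs, four resource monitors are started in the background, all keyed to one shared date +%s epoch so their pm.log files can be correlated afterwards. A minimal sketch of that startup pattern — the -d/-l/-p flags and collector names are taken from the trace above; the directory paths here are placeholders, not the resolved Jenkins paths:

    pm_dir=./scripts/perf/pm                    # collectors as invoked in the trace
    out_dir=./output/power                      # placeholder for the real power output dir
    suffix="monitor.autotest.sh.$(date +%s)"    # one shared epoch suffix per run
    for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$pm_dir/$collector" -d "$out_dir" -l -p "$suffix" &
    done
    # collect-bmc-pm needs elevated access, hence the sudo -E in the trace
    sudo -E "$pm_dir/collect-bmc-pm" -d "$out_dir" -l -p "$suffix" &

Each collector then redirects its own output to a matching monitor.autotest.sh.<epoch>_<name>.pm.log, which is what the four "Redirecting to ..." lines above show.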
00:02:54.816 13:48:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:54.816 13:48:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.816 13:48:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:54.816 13:48:52 -- common/autotest_common.sh@1455 -- # uname 00:02:54.816 13:48:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:54.816 13:48:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:54.816 13:48:52 -- common/autotest_common.sh@1475 -- # uname 00:02:54.816 13:48:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:54.816 13:48:52 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:54.816 13:48:52 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:54.816 13:48:52 -- spdk/autotest.sh@72 -- # hash lcov 00:02:54.816 13:48:52 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:54.816 13:48:52 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:54.816 --rc lcov_branch_coverage=1 00:02:54.816 --rc lcov_function_coverage=1 00:02:54.816 --rc genhtml_branch_coverage=1 00:02:54.816 --rc genhtml_function_coverage=1 00:02:54.816 --rc genhtml_legend=1 00:02:54.816 --rc geninfo_all_blocks=1 00:02:54.816 ' 00:02:54.816 13:48:52 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:54.816 --rc lcov_branch_coverage=1 00:02:54.816 --rc lcov_function_coverage=1 00:02:54.816 --rc genhtml_branch_coverage=1 00:02:54.816 --rc genhtml_function_coverage=1 00:02:54.816 --rc genhtml_legend=1 00:02:54.816 --rc geninfo_all_blocks=1 00:02:54.816 ' 00:02:54.816 13:48:52 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:54.816 --rc lcov_branch_coverage=1 00:02:54.816 --rc lcov_function_coverage=1 00:02:54.816 --rc genhtml_branch_coverage=1 00:02:54.816 --rc genhtml_function_coverage=1 00:02:54.816 --rc genhtml_legend=1 00:02:54.816 --rc geninfo_all_blocks=1 00:02:54.816 --no-external' 00:02:54.816 13:48:52 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:54.816 --rc lcov_branch_coverage=1 00:02:54.816 --rc lcov_function_coverage=1 00:02:54.816 --rc genhtml_branch_coverage=1 00:02:54.816 --rc genhtml_function_coverage=1 00:02:54.816 --rc genhtml_legend=1 00:02:54.816 --rc geninfo_all_blocks=1 00:02:54.816 --no-external' 00:02:54.816 13:48:52 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:55.076 lcov: LCOV version 1.14 00:02:55.076 13:48:52 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:10.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:10.056 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:20.064 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:20.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:20.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:20.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:20.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:20.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:20.326 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:20.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:20.326 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:20.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:20.326 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:20.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:20.326 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:20.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:20.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:20.327 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no 
functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:20.589 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:20.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:20.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:20.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:20.850 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:20.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:20.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:20.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:20.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:20.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:20.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:20.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:24.156 13:49:21 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:24.156 13:49:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:24.156 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:03:24.156 13:49:21 -- spdk/autotest.sh@91 -- # rm -f 00:03:24.156 13:49:21 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.458 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:27.458 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:27.719 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:27.719 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:27.979 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:27.979 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:27.979 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:27.979 13:49:25 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:27.979 13:49:25 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:27.979 13:49:25 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:27.979 13:49:25 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:27.979 13:49:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.979 13:49:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:27.979 13:49:25 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:27.979 13:49:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.979 13:49:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.979 13:49:25 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:27.979 13:49:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.979 13:49:25 -- 
spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.979 13:49:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:27.979 13:49:25 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:27.979 13:49:25 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.979 No valid GPT data, bailing 00:03:27.979 13:49:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.979 13:49:25 -- scripts/common.sh@391 -- # pt= 00:03:27.979 13:49:25 -- scripts/common.sh@392 -- # return 1 00:03:27.979 13:49:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.979 1+0 records in 00:03:27.979 1+0 records out 00:03:27.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00158635 s, 661 MB/s 00:03:27.979 13:49:25 -- spdk/autotest.sh@118 -- # sync 00:03:27.979 13:49:25 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.979 13:49:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.979 13:49:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:36.124 13:49:33 -- spdk/autotest.sh@124 -- # uname -s 00:03:36.124 13:49:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:36.124 13:49:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:36.124 13:49:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.124 13:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.124 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:03:36.124 ************************************ 00:03:36.124 START TEST setup.sh 00:03:36.124 ************************************ 00:03:36.124 13:49:33 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:36.124 * Looking for test storage... 00:03:36.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:36.124 13:49:34 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:36.124 13:49:34 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:36.124 13:49:34 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:36.124 13:49:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.124 13:49:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.124 13:49:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:36.124 ************************************ 00:03:36.124 START TEST acl 00:03:36.124 ************************************ 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:36.124 * Looking for test storage... 
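The device check traced earlier in this stretch decides whether /dev/nvme0n1 can be reused: spdk-gpt.py reports "No valid GPT data, bailing", blkid -s PTTYPE returns nothing, so block_in_use returns 1 (not in use) and autotest zeroes the device's first MiB. A simplified sketch of that decision, collapsing the two scripts into one snippet:

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev")    # empty when no partition table is found
    if [[ -z $pt ]]; then
        # device looks unused ("return 1" in the trace); wipe its first MiB
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi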
00:03:36.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:36.124 13:49:34 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.124 13:49:34 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.124 13:49:34 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:36.124 13:49:34 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:36.124 13:49:34 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:36.124 13:49:34 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:36.124 13:49:34 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:36.124 13:49:34 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.124 13:49:34 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.327 13:49:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:40.327 13:49:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:40.327 13:49:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.327 13:49:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:40.327 13:49:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.327 13:49:38 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:44.530 Hugepages 00:03:44.530 node hugesize free / total 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 00:03:44.531 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:44.531 13:49:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:44.531 13:49:42 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.531 13:49:42 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.531 13:49:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.531 ************************************ 00:03:44.531 START TEST denied 00:03:44.531 ************************************ 00:03:44.531 13:49:42 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:44.531 13:49:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:44.531 13:49:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:44.531 13:49:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:44.531 13:49:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.531 13:49:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.740 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:48.740 13:49:46 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.740 13:49:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.013 00:03:53.013 real 0m9.014s 00:03:53.013 user 0m3.041s 00:03:53.013 sys 0m5.310s 00:03:53.013 13:49:51 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.013 13:49:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:53.013 ************************************ 00:03:53.013 END TEST denied 00:03:53.013 ************************************ 00:03:53.273 13:49:51 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:53.273 13:49:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:53.273 13:49:51 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.273 13:49:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.273 13:49:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:53.273 ************************************ 00:03:53.273 START TEST allowed 00:03:53.273 ************************************ 00:03:53.273 13:49:51 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:53.273 13:49:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:53.273 13:49:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:53.273 13:49:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:53.273 13:49:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.273 13:49:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.562 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:58.562 13:49:56 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:58.562 13:49:56 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:58.562 13:49:56 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:58.562 13:49:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.562 13:49:56 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.763 00:04:02.763 real 0m9.600s 00:04:02.763 user 0m2.751s 00:04:02.763 sys 0m5.047s 00:04:02.763 13:50:00 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.763 13:50:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:02.763 ************************************ 00:04:02.763 END TEST allowed 00:04:02.763 ************************************ 00:04:02.763 13:50:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:02.763 00:04:02.763 real 0m26.761s 00:04:02.763 user 0m8.876s 00:04:02.763 sys 0m15.577s 00:04:02.763 13:50:00 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.763 13:50:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:02.763 ************************************ 00:04:02.763 END TEST acl 00:04:02.763 ************************************ 00:04:02.763 13:50:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.763 13:50:00 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.763 13:50:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.763 13:50:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.763 13:50:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:03.025 ************************************ 00:04:03.025 START TEST hugepages 00:04:03.025 ************************************ 00:04:03.025 13:50:00 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:03.025 * Looking for test storage... 00:04:03.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 105870856 kB' 'MemAvailable: 110460136 kB' 'Buffers: 2704 kB' 'Cached: 11387772 kB' 'SwapCached: 0 kB' 'Active: 7273476 kB' 'Inactive: 4660676 kB' 'Active(anon): 6882468 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547052 kB' 'Mapped: 200416 kB' 'Shmem: 6338792 kB' 'KReclaimable: 578288 kB' 'Slab: 1362588 kB' 'SReclaimable: 578288 kB' 'SUnreclaim: 784300 kB' 'KernelStack: 27168 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460880 kB' 'Committed_AS: 8411012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237244 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:03.025 13:50:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... the same @31 read / @32 compare / @32 continue cycle repeats for every remaining /proc/meminfo key, MemFree through HugePages_Surp ...]
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
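Note: the @16/@31/@32/@33 cycle condensed above is setup/common.sh's get_meminfo helper scanning a meminfo file one "key: value" pair at a time. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from SPDK's source, so details may differ:

# Sketch of the get_meminfo pattern traced at setup/common.sh@16-@33 above.
# Assumption: reconstructed from the xtrace, not the verbatim SPDK helper.
shopt -s extglob   # needed for the +([0-9]) pattern below
get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# With a node argument, read that node's own meminfo instead (@23).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
	while IFS=': ' read -r var val _; do       # @31
		[[ $var == "$get" ]] || continue   # every miss is one @32 continue in the log
		echo "$val"                        # @33: e.g. 2048 for Hugepagesize
		return 0
	done < <(printf '%s\n' "${mem[@]}")        # @16
	return 1
}

Called as get_meminfo Hugepagesize against the snapshot above, it prints 2048, which hugepages.sh then records as default_hugepages.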
00:04:03.027 13:50:01 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:03.027 13:50:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:03.027 13:50:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:03.027 13:50:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.027 ************************************
00:04:03.027 START TEST default_setup
00:04:03.027 ************************************
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
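Note: the get_test_nr_hugepages trace above reduces to one division, the requested pool size in kB over the default hugepage size. A sketch of the arithmetic with names taken from the trace (an assumption-level simplification; the real function also splits the pool when several node ids are passed):

# The arithmetic behind nr_hugepages=1024 above; simplified sketch.
default_hugepages=2048                         # kB, Hugepagesize from /proc/meminfo
size=2097152                                   # kB, first argument (a 2 GiB pool)
node_ids=(0)                                   # remaining arguments: target NUMA nodes
nr_hugepages=$((size / default_hugepages))     # 2097152 / 2048 = 1024 pages
declare -a nodes_test
for node in "${node_ids[@]}"; do
	nodes_test[node]=$nr_hugepages         # node 0 is asked to host all 1024 pages
done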
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.027 13:50:01 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:07.287 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:07.287 0000:65:00.0 (144d a80a): nvme -> vfio-pci
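Note: the lines above are scripts/setup.sh rebinding ioatdma channels and an NVMe controller to vfio-pci. The generic kernel mechanism for such a rebind is the sysfs driver_override interface; a sketch under that assumption (setup.sh itself does considerably more, and the BDF below is simply one taken from this log):

# Generic sysfs rebind, the mechanism behind "nvme -> vfio-pci" above.
bdf=0000:65:00.0
modprobe vfio-pci
# Detach the current driver (nvme here, ioatdma for the DMA channels), if any.
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
	echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind
fi
# Pin the device to vfio-pci and ask the driver core to reprobe it.
echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
echo "$bdf" > /sys/bus/pci/drivers_probe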
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:07.287 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:07.288 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107974604 kB' 'MemAvailable: 112563568 kB' 'Buffers: 2704 kB' 'Cached: 11387856 kB' 'SwapCached: 0 kB' 'Active: 7289416 kB' 'Inactive: 4660676 kB' 'Active(anon): 6898408 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562356 kB' 'Mapped: 200792 kB' 'Shmem: 6338876 kB' 'KReclaimable: 578224 kB' 'Slab: 1360100 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781876 kB' 'KernelStack: 27232 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8428692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237308 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:07.288 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.288 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:07.288 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:07.288 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @31 read / @32 compare / @32 continue cycle repeats for every remaining key, MemFree through HardwareCorrupted ...]
00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
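Note: verify_nr_hugepages gathers AnonHugePages, HugePages_Surp and HugePages_Rsvd through get_meminfo before checking the pool. An illustrative combination of those counters with the values printed in this log (the actual assertions live in setup/hugepages.sh and are stricter; get_meminfo is the sketch from the earlier note):

# Illustrative hugepage accounting check; assumption: not SPDK's exact logic.
anon=$(get_meminfo AnonHugePages)      # 0 in this log: THP is not in play
surp=$(get_meminfo HugePages_Surp)     # 0: no surplus pages above the pool
resv=$(get_meminfo HugePages_Rsvd)     # 0: nothing reserved by mappings yet
total=$(get_meminfo HugePages_Total)   # 1024, as requested by default_setup
free=$(get_meminfo HugePages_Free)     # 1024: the freshly allocated pool is idle
# For this idle pool every allocated page should still be free; this identity
# holds for the snapshot above, it is not a general kernel invariant.
(( free + surp + resv == total )) || echo "unexpected hugepage accounting" >&2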
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107975408 kB' 'MemAvailable: 112564372 kB' 'Buffers: 2704 kB' 'Cached: 11387860 kB' 'SwapCached: 0 kB' 'Active: 7288816 kB' 'Inactive: 4660676 kB' 'Active(anon): 6897808 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561680 kB' 'Mapped: 200728 kB' 'Shmem: 6338880 kB' 'KReclaimable: 578224 kB' 'Slab: 1360080 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781856 kB' 'KernelStack: 27232 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8428712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237308 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.289 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.290 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
[trace condensed: the same compare-and-skip cycle -- "setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]", "@32 -- # continue", "@31 -- # IFS=': '", "@31 -- # read -r var val _" -- repeats for each remaining /proc/meminfo key: SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd]
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
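The loop just traced is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time until it reaches the requested key. Reconstructed from the xtrace, the helper looks roughly like this -- a sketch for readability, not the verbatim SPDK source (the @17-@33 markers in the trace are the anchors):

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern used below

# Sketch of setup/common.sh:get_meminfo, reconstructed from the xtrace.
get_meminfo() {
	local get=$1    # key to report, e.g. HugePages_Surp (@17)
	local node=$2   # optional NUMA node number (@18)
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, read that node's own meminfo file instead (@23/@24).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"           # @28
	mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node <n> " prefix (@29)

	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"   # split "Key: value kB" (@31)
		[[ $var == "$get" ]] || continue         # the compare-and-skip loop (@32)
		echo "$val"                              # @33
		return 0
	done
}

Called as surp=$(get_meminfo HugePages_Surp), it prints 0 on this machine, which is exactly the surp=0 assignment recorded above.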
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:07.291 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107974992 kB' 'MemAvailable: 112563956 kB' 'Buffers: 2704 kB' 'Cached: 11387876 kB' 'SwapCached: 0 kB' 'Active: 7288320 kB' 'Inactive: 4660676 kB' 'Active(anon): 6897312 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561660 kB' 'Mapped: 200652 kB' 'Shmem: 6338896 kB' 'KReclaimable: 578224 kB' 'Slab: 1360080 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781856 kB' 'KernelStack: 27232 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8428732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237324 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
[trace condensed: every key from MemTotal through HugePages_Free is compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue]
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:07.293 nr_hugepages=1024
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:07.293 resv_hugepages=0
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:07.293 surplus_hugepages=0
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:07.293 anon_hugepages=0
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
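The @99-@109 sequence above is the harness's hugepage accounting check: the configured page count must be fully explained by the pool plus reserved plus surplus pages. A standalone sketch of that check, assuming the get_meminfo sketch given earlier; the "expected" variable and the AnonHugePages source for anon_hugepages are assumptions, since those assignments fall outside the captured trace:

#!/usr/bin/env bash
# Sketch of the hugepages.sh@99..@109 accounting check seen in the trace.
# Assumes get_meminfo (sketched earlier) is already defined.

expected=1024                               # assumption: pages this run configured
surp=$(get_meminfo HugePages_Surp)          # 0 in this trace
resv=$(get_meminfo HugePages_Rsvd)          # 0 in this trace
anon=$(get_meminfo AnonHugePages)           # 0 -- assumed source of anon_hugepages
nr_hugepages=$(get_meminfo HugePages_Total) # 1024 in this trace

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# As traced: 1024 == 1024 + 0 + 0, so the whole pool is plain
# (non-surplus, non-reserved) hugepages.
(( expected == nr_hugepages + surp + resv )) || exit 1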
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: the same get_meminfo prologue as above -- local get=HugePages_Total, node= (empty), mem_f=/proc/meminfo, mapfile -t mem, "Node <n> " prefix strip]
00:04:07.293 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107974992 kB' 'MemAvailable: 112563956 kB' 'Buffers: 2704 kB' 'Cached: 11387880 kB' 'SwapCached: 0 kB' 'Active: 7287964 kB' 'Inactive: 4660676 kB' 'Active(anon): 6896956 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561300 kB' 'Mapped: 200652 kB' 'Shmem: 6338900 kB' 'KReclaimable: 578224 kB' 'Slab: 1360080 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781856 kB' 'KernelStack: 27216 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8430168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237324 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
[trace condensed: every key from MemTotal through Unaccepted is compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue]
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
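get_nodes, as traced, globs the NUMA nodes under /sys/devices/system/node and records each node's hugepage count before the per-node checks start. A sketch of that walk, reusing get_meminfo and resv from the earlier sketches; the nodes_test copy is an assumption, since its initialization falls outside the captured trace:

shopt -s extglob nullglob

# Sketch of get_nodes as traced above (hugepages.sh@27-@33).
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# node0 reports 1024 hugepages on this box, node1 reports 0
	nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}   # 2 here
(( no_nodes > 0 )) || exit 1

# Assumption: nodes_test starts as a copy of nodes_sys; the @115/@116 loop
# then folds the reserved pages into each node's expected count.
nodes_test=("${nodes_sys[@]}")
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))
done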
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.294 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57973568 kB' 'MemUsed: 7685440 kB' 'SwapCached: 0 kB' 'Active: 2281656 kB' 'Inactive: 1033508 kB' 'Active(anon): 2068320 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1033508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2907032 kB' 'Mapped: 64500 kB' 'AnonPages: 411308 kB' 'Shmem: 1660188 kB' 'KernelStack: 15016 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198088 kB' 'Slab: 572992 kB' 'SReclaimable: 198088 kB' 'SUnreclaim: 374904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: node0's keys, MemTotal through KReclaimable, are compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue; the captured log breaks off mid-entry here]
00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 --
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.295 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.296 node0=1024 expecting 1024 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.296 00:04:07.296 real 0m4.125s 00:04:07.296 user 0m1.586s 00:04:07.296 sys 0m2.539s 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.296 13:50:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:07.296 ************************************ 00:04:07.296 END TEST default_setup 00:04:07.296 ************************************ 00:04:07.296 13:50:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.296 13:50:05 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:07.296 13:50:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.296 13:50:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.296 13:50:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.296 ************************************ 00:04:07.296 START TEST per_node_1G_alloc 00:04:07.296 ************************************ 00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- 
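get_test_nr_hugepages converts a size budget into a page count and splits it across the requested NUMA nodes: with size=1048576 kB and the 2048 kB Hugepagesize reported in the meminfo snapshots below, that is 512 pages (1 GiB) for each of nodes 0 and 1. A minimal sketch of the arithmetic the trace walks through next (variable names follow the hugepages.sh trace; this is an illustration, not the full function):

    size=1048576              # kB requested, from "get_test_nr_hugepages 1048576 0 1"
    default_hugepages=2048    # kB per page, per "Hugepagesize: 2048 kB" in the snapshots
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
    node_ids=(0 1)
    nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages             # 512 pages on node0 and on node1
    done
    IFS=,   # mirrors the "local IFS=," at hugepages.sh@143
    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[*]}"   # NRHUGE=512 HUGENODE=0,1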
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:07.296 13:50:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
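The NRHUGE/HUGENODE pair set just above is consumed by scripts/setup.sh, which re-reserves the hugepage pool and probes the test devices; the 00:04:11.508 lines that follow are its output, confirming every PCI function is already bound to vfio-pci. For reference, a hedged manual equivalent of the hugepage half of that request, using the standard kernel sysfs knobs (the paths are generic kernel interfaces, not taken from this log):

    # Reserve 512 x 2048 kB hugepages on each of NUMA nodes 0 and 1,
    # i.e. what NRHUGE=512 HUGENODE=0,1 asks setup.sh to do.
    for node in 0 1; do
        echo 512 | sudo tee "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 / 1024 afterwards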
00:04:11.508 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:11.508 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.508 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108005648 kB' 'MemAvailable: 112594612 kB' 'Buffers: 2704 kB' 'Cached: 11388064 kB' 'SwapCached: 0 kB' 'Active: 7287952 kB' 'Inactive: 4660676 kB' 'Active(anon): 6896944 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561000 kB' 'Mapped: 199692 kB' 'Shmem: 6339084 kB' 'KReclaimable: 578224 kB' 'Slab: 1359328 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781104 kB' 'KernelStack: 27312 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8421720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237516 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.509 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the IFS=': ' / read -r var val _ / compare-and-continue cycle repeats for every field of the snapshot above until AnonHugePages matches ...]
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
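The lookup that just returned is common.sh's get_meminfo pattern: slurp /proc/meminfo (or a node's meminfo file) with mapfile, strip any "Node N " prefix so both files parse identically, then read var/val pairs with IFS=': ' until the requested key matches and its value is echoed. A condensed, runnable sketch of that pattern (simplified from the traced function, not the verbatim SPDK source):

    shopt -s extglob   # needed for the +([0-9]) prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local var val _
        # Per-node variant: node meminfo files live under sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # numeric value only; units column is discarded
                return 0
            fi
        done
        return 1
    }

    get_meminfo AnonHugePages      # -> 0 in this log
    get_meminfo HugePages_Free 0   # per-node lookup against node0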
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108008144 kB' 'MemAvailable: 112597108 kB' 'Buffers: 2704 kB' 'Cached: 11388068 kB' 'SwapCached: 0 kB' 'Active: 7287696 kB' 'Inactive: 4660676 kB' 'Active(anon): 6896688 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560764 kB' 'Mapped: 199684 kB' 'Shmem: 6339088 kB' 'KReclaimable: 578224 kB' 'Slab: 1359320 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781096 kB' 'KernelStack: 27264 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8421740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237596 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.510 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the read-and-compare cycle repeats for every field of the snapshot above until HugePages_Surp matches ...]
00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
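verify_nr_hugepages reads three adjustment counters before it can attribute the pool: AnonHugePages (transparent hugepages in use, in kB), HugePages_Surp (surplus pages allocated beyond nr_hugepages via overcommit), and, next in the trace, HugePages_Rsvd (pages reserved by a mapping but not yet faulted in). All are expected to be 0 in this run, so the free count should equal the configured total. A standalone sketch of that check under the same assumption (field names are the standard /proc/meminfo hugepage counters; the exact pass/fail logic of the traced function may differ):

    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
    # With no THP usage, no overcommit, and no reservations, every
    # configured page should still be free.
    if (( anon == 0 && surp == 0 && resv == 0 && free == total )); then
        echo "hugepage pool clean: $total pages, all free"
    fi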
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108010056 kB' 'MemAvailable: 112599020 kB' 'Buffers: 2704 kB' 'Cached: 11388084 kB' 'SwapCached: 0 kB' 'Active: 7287360 kB' 'Inactive: 4660676 kB' 'Active(anon): 6896352 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560404 kB' 'Mapped: 199676 kB' 'Shmem: 6339104 kB' 'KReclaimable: 578224 kB' 'Slab: 1359316 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781092 kB' 'KernelStack: 27296 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8421760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237500 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.512 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.512 
[00:04:11.512-00:04:11.514 xtrace loop elided: every /proc/meminfo key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue]
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
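The trace above is SPDK's get_meminfo helper (setup/common.sh, as named in the log) resolving HugePages_Surp and then HugePages_Rsvd: each meminfo line is split on ': ' into a key and a value, non-matching keys hit continue, and the matching key's value is echoed before return 0. A minimal bash sketch of that pattern follows; it is a simplification, not the verbatim SPDK function, and the sed prefix-strip stands in for the helper's mapfile-plus-extglob handling seen in the trace.

  # Sketch of the lookup pattern traced above (simplified, hypothetical names).
  # $1 is the /proc/meminfo key to fetch, $2 an optional NUMA node index.
  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # With a node argument, read that node's own meminfo copy from sysfs.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix each line with "Node N "; sed drops it here
      # (the real helper strips it via mem=("${mem[@]#Node +([0-9]) }")).
      while IFS=': ' read -r var val _; do
          # xtrace escapes each character of the unquoted literal pattern,
          # which is why the log prints \H\u\g\e\P\a\g\e\s\_\S\u\r\p.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
      return 1
  }

  surp=$(get_meminfo HugePages_Surp)   # echoes 0 in the run above
  resv=$(get_meminfo HugePages_Rsvd)   # echoes 0 in the run above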
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.514 nr_hugepages=1024
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.514 resv_hugepages=0
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.514 surplus_hugepages=0
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.514 anon_hugepages=0
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.514 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108016324 kB' 'MemAvailable: 112605288 kB' 'Buffers: 2704 kB' 'Cached: 11388104 kB' 'SwapCached: 0 kB' 'Active: 7287808 kB' 'Inactive: 4660676 kB' 'Active(anon): 6896800 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560828 kB' 'Mapped: 199676 kB' 'Shmem: 6339124 kB' 'KReclaimable: 578224 kB' 'Slab: 1359316 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781092 kB' 'KernelStack: 27376 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8421784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237532 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
[00:04:11.514-00:04:11.516 xtrace loop elided: every key from MemTotal through Unaccepted is compared against HugePages_Total and skipped with continue]
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
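The guards at hugepages.sh@107-110 assert the accounting just gathered: the kernel-wide HugePages_Total (1024) must equal the requested nr_hugepages plus the surplus and reserved pages read back (both 0 in this run). A worked restatement under those values, reusing the get_meminfo sketch above; since the suite runs under set -e, a false (( ... )) aborts the test:

  # Accounting check mirrored from the trace (values are from this run).
  nr_hugepages=1024                      # pages requested for the test
  surp=$(get_meminfo HugePages_Surp)     # 0
  resv=$(get_meminfo HugePages_Rsvd)     # 0
  total=$(get_meminfo HugePages_Total)   # 1024
  (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
  (( total == nr_hugepages ))                 # both guards pass here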
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.516 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59056716 kB' 'MemUsed: 6602292 kB' 'SwapCached: 0 kB' 'Active: 2280996 kB' 'Inactive: 1033508 kB' 'Active(anon): 2067660 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1033508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2907220 kB' 'Mapped: 63888 kB' 'AnonPages: 410404 kB' 'Shmem: 1660376 kB' 'KernelStack: 15080 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198088 kB' 'Slab: 572648 kB' 'SReclaimable: 198088 kB' 'SUnreclaim: 374560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[00:04:11.516-00:04:11.517 xtrace loop elided: node0 meminfo keys MemTotal through Unaccepted are compared against HugePages_Surp and skipped with continue; this excerpt ends inside that loop]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 48959112 kB' 'MemUsed: 11720740 kB' 'SwapCached: 0 kB' 'Active: 5006628 kB' 'Inactive: 3627168 kB' 'Active(anon): 4828956 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3627168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8483616 kB' 'Mapped: 135788 kB' 'AnonPages: 150212 kB' 'Shmem: 4678776 kB' 
'KernelStack: 12312 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 380136 kB' 'Slab: 786668 kB' 'SReclaimable: 380136 kB' 'SUnreclaim: 406532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.517 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.518 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.519 node0=512 expecting 512 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:11.519 node1=512 expecting 512 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.519 00:04:11.519 real 0m3.978s 00:04:11.519 user 0m1.525s 00:04:11.519 sys 0m2.514s 00:04:11.519 13:50:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.519 13:50:09 
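[editor's note] The field-by-field traces above are setup/common.sh's get_meminfo helper doing a linear scan of a meminfo file. A minimal, self-contained sketch of that pattern, reconstructed from the xtrace records above (the real SPDK helper may differ in detail):

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem line
    mem_f=/proc/meminfo
    # Per-node counters live under /sys; fall back to the global file otherwise
    # (with an empty $node the /sys path does not exist, as seen in the trace).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix used by per-node files
    for line in "${mem[@]}"; do
        # split "HugePages_Surp: 0" into var=HugePages_Surp val=0
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. get_meminfo HugePages_Surp 1 prints 0 in the trace above
            return 0
        fi
    done
    return 1
}

Every "[[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" record in the log is one iteration of that loop under xtrace, which is why a single lookup produces dozens of trace records.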
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:11.519 ************************************
00:04:11.519 END TEST per_node_1G_alloc
00:04:11.519 ************************************
00:04:11.519 13:50:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:11.519 13:50:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:11.519 13:50:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:11.519 13:50:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:11.519 13:50:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.519 ************************************
00:04:11.519 START TEST even_2G_alloc
00:04:11.519 ************************************
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
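[editor's note] The records above show get_test_nr_hugepages turning the requested 2097152 into nr_hugepages=1024, and get_test_nr_hugepages_per_node then assigning 512 pages to each of the two nodes. The arithmetic, assuming the size argument and the default hugepage size are both in kB (which the numbers above bear out: 2 GiB requested, 2 MiB pages), is simply:

# standalone rendering of the traced arithmetic; variable names follow the trace
size=2097152            # requested allocation in kB (2 GiB)
default_hugepages=2048  # default hugepage size in kB (2 MiB)
_no_nodes=2             # NUMA nodes on this test rig
nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages
echo $(( nr_hugepages / _no_nodes ))          # 512 per node, matching nodes_test[...]=512

With HUGE_EVEN_ALLOC=yes, setup.sh is then expected to spread those 1024 pages evenly instead of letting the kernel place them on whichever node has free memory.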
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.519 13:50:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:14.819 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:14.819 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.819 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108040120 kB' 'MemAvailable: 112629084 kB' 'Buffers: 2704 kB' 'Cached: 11388248 kB' 'SwapCached: 0 kB' 'Active: 7289588 kB' 'Inactive: 4660676 kB' 'Active(anon): 6898580 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562048 kB' 'Mapped: 199752 kB' 'Shmem: 6339268 kB' 'KReclaimable: 578224 kB' 'Slab: 1359240 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781016 kB' 'KernelStack: 27200 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8419704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:14.819 [... repeated setup/common.sh@31-32 trace records elided: every /proc/meminfo field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped with continue ...]
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.088 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108040120 kB' 'MemAvailable: 112629084 kB' 'Buffers: 2704 kB' 'Cached: 11388252 kB' 'SwapCached: 0 kB' 'Active: 7289940 kB' 'Inactive: 4660676 kB' 'Active(anon): 6898932 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562408 kB' 'Mapped: 199752 kB' 'Shmem: 6339272 kB' 'KReclaimable: 578224 kB' 'Slab: 1359212 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780988 kB' 'KernelStack: 27184 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8419724 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:15.089 [... repeated setup/common.sh@31-32 trace records elided: the same scan is now testing each field against HugePages_Surp and has reached SUnreclaim at the point this excerpt ends ...]
00:04:15.089 13:50:12
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.089 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
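For reference, a minimal bash reconstruction of the get_meminfo helper implied by the common.sh@17-@33 line tags in the trace above. It is inferred from the xtrace, not copied from the SPDK tree; in particular the if/elif branching around @23-@25 is an assumption that fits the visible trace (both tests run when no node is given, only @23-@24 run for node 0):

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

# Return the value of one /proc/meminfo (or per-node meminfo) key.
#   get_meminfo HugePages_Surp     -> system-wide value
#   get_meminfo HugePages_Surp 0   -> value for NUMA node 0
get_meminfo() {
	local get=$1
	local node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	elif [[ -n $node ]]; then
		return 1  # a node was requested but its meminfo is missing
	fi

	mapfile -t mem <"$mem_f"
	# Per-node meminfo prefixes every line with "Node <n> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan key by key, exactly like the compare/continue trace above.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

On this run, get_meminfo HugePages_Surp prints 0 and get_meminfo HugePages_Total prints 1024.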
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108040132 kB' 'MemAvailable: 112629096 kB' 'Buffers: 2704 kB' 'Cached: 11388268 kB' 'SwapCached: 0 kB' 'Active: 7289524 kB' 'Inactive: 4660676 kB' 'Active(anon): 6898516 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562512 kB' 'Mapped: 199692 kB' 'Shmem: 6339288 kB' 'KReclaimable: 578224 kB' 'Slab: 1359244 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781020 kB' 'KernelStack: 27184 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8419744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.090 13:50:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue xtrace elided: every remaining meminfo key is tested against HugePages_Rsvd and skipped ...]
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:15.092 nr_hugepages=1024
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:15.092 resv_hugepages=0
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:15.092 surplus_hugepages=0
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:15.092 anon_hugepages=0
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
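The checks just traced at hugepages.sh@107-@109 encode the test's hugepage accounting invariant: the page count the kernel reports must equal the count the test requested, net of surplus and reserved pages. A hedged restatement using this run's values and the get_meminfo sketch above (variable names mirror the trace):

nr_hugepages=1024                     # requested by the even_2G_alloc test
anon=$(get_meminfo AnonHugePages)     # 0 kB of transparent hugepages on this run
surp=$(get_meminfo HugePages_Surp)    # 0
resv=$(get_meminfo HugePages_Rsvd)    # 0
total=$(get_meminfo HugePages_Total)  # 1024
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2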
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.092 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108041556 kB' 'MemAvailable: 112630520 kB' 'Buffers: 2704 kB' 'Cached: 11388292 kB' 'SwapCached: 0 kB' 'Active: 7289572 kB' 'Inactive: 4660676 kB' 'Active(anon): 6898564 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562504 kB' 'Mapped: 199692 kB' 'Shmem: 6339312 kB' 'KReclaimable: 578224 kB' 'Slab: 1359244 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781020 kB' 'KernelStack: 27184 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8419768 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 
13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.093 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
[... xtrace elided: setup/common.sh@31-@32 read each remaining /proc/meminfo field (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) and "continue" until the requested field matches ...]
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
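The trace above is common.sh's get_meminfo walking a meminfo file one "field: value" pair at a time; every non-matching field shows up as one [[ ... ]]/continue pair, which is why this loop dominates the log. A minimal re-creation of the helper, reconstructed from the traced statements (an illustrative sketch, not the canonical SPDK source):

    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        local get=$1
        local node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # A node-scoped query reads that node's own meminfo instead; its
        # lines carry a "Node N " prefix that /proc/meminfo lines lack.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # one traced iteration per skipped field
            echo "$val"   # e.g. 1024 for HugePages_Total above
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Total for the system-wide count, or as get_meminfo HugePages_Surp 0 for a single node, matching the invocations in the trace.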
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.094 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59076864 kB' 'MemUsed: 6582144 kB' 'SwapCached: 0 kB' 'Active: 2280696 kB' 'Inactive: 1033508 kB' 'Active(anon): 2067360 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1033508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2907356 kB' 'Mapped: 63888 kB' 'AnonPages: 410048 kB' 'Shmem: 1660512 kB' 'KernelStack: 15000 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198088 kB' 'Slab: 572456 kB' 'SReclaimable: 198088 kB' 'SUnreclaim: 374368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: the @31-@32 read/continue loop skips each node0 field until HugePages_Surp matches ...]
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
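For reference, the node-scoped read just traced can be reproduced by hand. The per-node sysfs file carries the "Node N " prefix the helper strips, and the values below are the ones printed in the node0 dump above (a session shown for illustration, using the get_meminfo sketch from the earlier note):

    $ grep -E 'HugePages_(Total|Surp)' /sys/devices/system/node/node0/meminfo
    Node 0 HugePages_Total:   512
    Node 0 HugePages_Surp:      0
    $ get_meminfo HugePages_Surp 0
    0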
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:15.096 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 48965212 kB' 'MemUsed: 11714640 kB' 'SwapCached: 0 kB' 'Active: 5008548 kB' 'Inactive: 3627168 kB' 'Active(anon): 4830876 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3627168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8483660 kB' 'Mapped: 135804 kB' 'AnonPages: 152128 kB' 'Shmem: 4678820 kB' 'KernelStack: 12184 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 380136 kB' 'Slab: 786788 kB' 'SReclaimable: 380136 kB' 'SUnreclaim: 406652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: get_meminfo setup for node1 (as for node0 above) and the read/continue iterations until HugePages_Surp matches ...]
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:15.098 node0=512 expecting 512
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:15.098 node1=512 expecting 512
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:15.098
00:04:15.098 real 0m3.783s
00:04:15.098 user 0m1.503s
00:04:15.098 sys 0m2.304s
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:15.098 13:50:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
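Condensed, the per-node bookkeeping that just passed (hugepages.sh@110-@130) amounts to the following. This is a sketch assembled from the traced lines, reusing the get_meminfo sketch above; the fixed numbers stand in for values the real script derives at run time:

    verify_even_2G_alloc() {
        local nr_hugepages=1024 surp=0 resv=0
        local -a nodes_sys=(512 512)    # per-node HugePages_Total, per the dumps above
        local -a nodes_test=(512 512)   # the expected even split of 1024
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
        local node
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))                                   # @116
            (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1    # @130
        done
    }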
00:04:15.098 ************************************
00:04:15.098 END TEST even_2G_alloc
00:04:15.098 ************************************
00:04:15.098 13:50:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:15.098 13:50:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:15.098 13:50:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:15.098 13:50:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:15.098 13:50:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:15.098 ************************************
00:04:15.098 START TEST odd_alloc
00:04:15.098 ************************************
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.098 13:50:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
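In the setup just traced, HUGEMEM=2049 MB of hugepages (2098176 kB) comes out to 1025 pages at the default 2048 kB page size, and get_test_nr_hugepages_per_node splits them across the two nodes so the odd remainder lands on node 0 (nodes_test[1]=512, then nodes_test[0]=513; the 'Hugetlb: 2099200 kB' seen in the later dump is exactly 1025 * 2048 kB). A sketch of that split, inferred from the traced assignments (names follow the trace; this is illustrative, not the canonical source):

    split_hugepages_per_node() {
        local _nr_hugepages=$1 _no_nodes=$2
        local -a nodes_test=()
        # Each pass gives the current node floor(remaining / nodes_left),
        # so any odd remainder accumulates toward node 0.
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
            : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ": 513", ": 0"
            : $(( --_no_nodes ))                                  # traced as ": 1", ": 0"
        done
        echo "${nodes_test[@]}"
    }

    split_hugepages_per_node 1025 2   # prints "513 512", as in the trace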
00:04:18.447 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:18.447 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108037960 kB' 'MemAvailable: 112626924 kB' 'Buffers: 2704 kB' 'Cached: 11388424 kB' 'SwapCached: 0 kB' 'Active: 7290548 kB' 'Inactive: 4660676 kB' 'Active(anon): 6899540 kB' 'Inactive(anon): 0 kB' 
'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563288 kB' 'Mapped: 199752 kB' 'Shmem: 6339444 kB' 'KReclaimable: 578224 kB' 'Slab: 1358948 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780724 kB' 'KernelStack: 27216 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8420528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237468 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.447 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.447 13:50:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[ ... repetitive @31/@32 iterations collapsed: each remaining /proc/meminfo key (Inactive through HardwareCorrupted) is read with IFS=': ', compared against AnonHugePages, and skipped with continue ... ]
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
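
The @31/@32 pairs collapsed above are one pass of the setup/common.sh get_meminfo helper: it reads /proc/meminfo line by line with IFS=': ', skips every key that is not the one requested, and echoes the value of the first match — here AnonHugePages, whose value is 0 kB, hence anon=0. A minimal, self-contained sketch of that pattern (the function name and the not-found fallback are illustrative assumptions, not the verbatim SPDK helper):

#!/usr/bin/env bash
# Sketch of the get_meminfo scan traced above; illustrative, not the
# verbatim setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # _ swallows the trailing "kB" column
        [[ $var == "$get" ]] || continue    # non-matching key: skip (the @32 lines)
        echo "$val"                         # matching key: emit value (the @33 echo)
        return 0
    done < /proc/meminfo
    return 1    # assumed fallback when the key is absent
}

get_meminfo_sketch AnonHugePages    # prints 0 on this box
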
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.448 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.449 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108037852 kB' 'MemAvailable: 112626816 kB' 'Buffers: 2704 kB' 'Cached: 11388428 kB' 'SwapCached: 0 kB' 'Active: 7290200 kB' 'Inactive: 4660676 kB' 'Active(anon): 6899192 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562920 kB' 'Mapped: 199716 kB' 'Shmem: 6339448 kB' 'KReclaimable: 578224 kB' 'Slab: 1358948 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780724 kB' 'KernelStack: 27184 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8420548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237452 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
[ ... repetitive @31/@32 iterations collapsed: every key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue ... ]
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.450 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108037880 kB' 'MemAvailable: 112626844 kB' 'Buffers: 2704 kB' 'Cached: 11388444 kB' 'SwapCached: 0 kB' 'Active: 7290148 kB' 'Inactive: 4660676 kB' 'Active(anon): 6899140 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562936 kB' 'Mapped: 199716 kB' 'Shmem: 6339464 kB' 'KReclaimable: 578224 kB' 'Slab: 1358960 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780736 kB' 'KernelStack: 27168 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8420568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
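
The common.sh@22-@29 lines repeated before each snapshot show how the helper chooses its input: $node is empty here, so the per-node path /sys/devices/system/node/node/meminfo does not exist and the helper falls back to /proc/meminfo; when a node is given, the per-node file prefixes every line with "Node N ", which the extglob expansion at @29 strips. A hedged sketch of that selection logic (the sysfs path is the real kernel layout; the function and variable names are illustrative):

#!/usr/bin/env bash
shopt -s extglob    # required for the +([0-9]) pattern below
# Sketch of the meminfo source selection traced at common.sh@22-@29.
read_meminfo_lines() {
    local node=$1 mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, prefer the per-NUMA-node view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it so both
    # sources parse identically downstream.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

read_meminfo_lines    # no node: reads /proc/meminfo, as in this run
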
[ ... repetitive @31/@32 iterations collapsed: every key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue ... ]
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:18.718 nr_hugepages=1025
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:18.718 resv_hugepages=0
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:18.718 surplus_hugepages=0
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:18.718 anon_hugepages=0
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
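
Those last lines are the point of the odd_alloc case: the test configured an odd hugepage count (per the nr_hugepages=1025 echo above) and, having collected surp=0, resv=0, and anon=0, asserts that the kernel's HugePages_Total accounts for exactly the requested pages plus surplus plus reserved. A compact sketch of that assertion (the awk-based getter and exit codes are illustrative; hugepages.sh wires this through get_meminfo instead):

#!/usr/bin/env bash
# Sketch of the odd_alloc assertion traced at hugepages.sh@102-@109.
get() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

nr_hugepages=1025                   # the odd count the test configured
total=$(get HugePages_Total)
surp=$(get HugePages_Surp)
resv=$(get HugePages_Rsvd)

# All requested pages must be visible, none rounded away per NUMA node.
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1    # holds here since surp == resv == 0
echo "nr_hugepages=$nr_hugepages verified"
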
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.718 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108037572 kB' 'MemAvailable: 112626536 kB' 'Buffers: 2704 kB' 'Cached: 11388464 kB' 'SwapCached: 0 kB' 'Active: 7290160 kB' 'Inactive: 4660676 kB' 'Active(anon): 6899152 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562892 kB' 'Mapped: 199716 kB' 'Shmem: 6339484 kB' 'KReclaimable: 578224 kB' 'Slab: 1358960 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780736 kB' 'KernelStack: 27184 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8420588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
[ ... repetitive @31/@32 iterations collapsed: MemTotal through Writeback each compared against HugePages_Total and skipped with continue ... ]
00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
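[editor's sketch] The long run of "-- # continue" entries above is the xtrace of setup/common.sh's get_meminfo scanning every /proc/meminfo key until it reaches HugePages_Total and echoes 1025. A minimal standalone reconstruction of that lookup, following only what the trace shows (the real helper lives in setup/common.sh; this is a simplified sketch, not the script itself):

shopt -s extglob   # needed for the "Node N " prefix pattern below

# Look up one key in /proc/meminfo, or in a node's meminfo copy when a
# node index is given (per-node files prefix each line with "Node N").
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the per-node prefix, as at common.sh@29
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the repeated 'continue' entries above
        echo "$val"                       # e.g. 1025 for HugePages_Total in this log
        return 0
    done
    return 1
}

Called as get_meminfo_sketch HugePages_Total it prints 1025 on the machine in this log; get_meminfo_sketch HugePages_Surp 0 reads node0's sysfs copy instead.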
00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59079544 kB' 'MemUsed: 6579464 kB' 'SwapCached: 0 kB' 'Active: 2280392 kB' 'Inactive: 1033508 kB' 'Active(anon): 2067056 kB' 
'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1033508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2907452 kB' 'Mapped: 63888 kB' 'AnonPages: 409636 kB' 'Shmem: 1660608 kB' 'KernelStack: 14984 kB' 'PageTables: 4700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198088 kB' 'Slab: 572436 kB' 'SReclaimable: 198088 kB' 'SUnreclaim: 374348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
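[editor's sketch] Node 0 reports HugePages_Total: 512 in the dump above while the machine-wide total is 1025, which is the point of odd_alloc: an odd page count cannot split evenly across two nodes. A hedged illustration of the split (not the exact helper in setup/hugepages.sh):

# Distribute an odd hugepage total over N nodes: every node gets the
# floor, and the remainder lands on the trailing node(s).
split_hugepages() {
    local total=$1 nodes=$2 base rem i
    base=$(( total / nodes ))
    rem=$(( total % nodes ))
    for (( i = 0; i < nodes; i++ )); do
        echo "node$i=$(( base + (i >= nodes - rem ? 1 : 0) ))"
    done
}
split_hugepages 1025 2   # node0=512, node1=513 -- the pair this log verifies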
00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.719 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 48957524 kB' 'MemUsed: 11722328 kB' 'SwapCached: 0 kB' 'Active: 5009768 kB' 'Inactive: 3627168 kB' 'Active(anon): 4832096 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3627168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8483716 kB' 'Mapped: 135828 kB' 'AnonPages: 153256 kB' 'Shmem: 4678876 kB' 'KernelStack: 12200 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 380136 kB' 'Slab: 786524 kB' 'SReclaimable: 380136 kB' 'SUnreclaim: 406388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
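[editor's sketch] With node1 reporting HugePages_Total: 513, the two per-node dumps account for the full 1025 pages. The identity the test keeps asserting ((( 1025 == nr_hugepages + surp + resv )) at hugepages.sh@109/@110) can be checked end to end with the sketch helper from earlier; this wrapper is hypothetical, with the two-node layout taken from this log:

# Consistency check: kernel totals must equal requested + surplus + reserved,
# and the per-node totals must sum to the global one.
verify_totals() {
    local nr=$1 surp resv total node_sum=0 n
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)
    (( total == nr + surp + resv )) || return 1
    for n in 0 1; do
        (( node_sum += $(get_meminfo_sketch HugePages_Total "$n") ))
    done
    (( node_sum == total ))   # 512 + 513 == 1025 on this box
}
verify_totals 1025 && echo "hugepage accounting consistent"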
00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.720 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
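[editor's sketch] The pass/fail decision that follows ('node0=512 expecting 513', 'node1=513 expecting 512', then [[ 512 513 == \5\1\2\ \5\1\3 ]]) is order-insensitive: hugepages.sh@127 writes each count as an array index, so listing the indices back yields a sorted set. A small reproduction of that trick, with the values from this log's echo lines:

# Compare two per-node layouts as multisets: bash returns indexed-array
# keys in ascending order, so "${!arr[*]}" is the sorted count list.
nodes_test=([0]=512 [1]=513)   # what the test computed
nodes_sys=([0]=513 [1]=512)    # what sysfs reports
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "layouts match"  # "512 513" on both sides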
00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:18.721 node0=512 expecting 513 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:18.721 node1=513 expecting 512 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:18.721 00:04:18.721 real 0m3.439s 00:04:18.721 user 0m1.176s 00:04:18.721 sys 0m2.192s 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.721 13:50:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.721 ************************************ 00:04:18.721 END TEST odd_alloc 00:04:18.721 ************************************ 00:04:18.721 13:50:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:18.721 13:50:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:18.721 13:50:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.721 13:50:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.721 13:50:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.721 ************************************ 00:04:18.721 START TEST custom_alloc 00:04:18.721 ************************************ 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.721 13:50:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.933 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
00:04:22.933 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:22.933 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107003916 kB' 'MemAvailable: 111592880 kB' 'Buffers: 2704 kB' 'Cached: 11388616 kB' 'SwapCached: 0 kB' 'Active: 7291984 kB' 'Inactive: 4660676 kB' 'Active(anon): 6900976 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564908 kB' 'Mapped: 199756 kB' 'Shmem: 6339636 kB' 'KReclaimable: 578224 kB' 'Slab: 1358780 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780556 kB' 'KernelStack: 27152 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8421256 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237340 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
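The xtrace from common.sh@17 through common.sh@33 around these comparisons shows the parsing idiom at work: all of /proc/meminfo is slurped with mapfile, any per-node 'Node N ' prefix is stripped with an extglob expansion, and each 'key: value' line is split with IFS=': ' until the requested field matches, at which point its value is echoed. Reconstructed as a standalone sketch (the flow follows the trace, but treat names and details as approximate; extglob must be enabled for the +([0-9]) pattern):

    shopt -s extglob   # required by the +([0-9]) pattern below

    get_meminfo() {    # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _
        # with a node argument, read that node's meminfo instead (common.sh@23)
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"            # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")     # drop "Node N " prefix (common.sh@29)
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # common.sh@31
            [[ $var == "$get" ]] || continue         # common.sh@32
            echo "$val"                              # common.sh@33
            return 0
        done
        return 1
    }

With the meminfo dump printed above, get_meminfo AnonHugePages yields 0 (the 'AnonHugePages: 0 kB' line), which is exactly what hugepages.sh@97 records as anon=0 further down.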
00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
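Once this scan finishes, hugepages.sh@97 below records anon=0, and the same get_meminfo walk is repeated for HugePages_Surp (hugepages.sh@99) and HugePages_Rsvd (hugepages.sh@100). A condensed, hypothetical version of that verification step, assuming the get_meminfo sketch above (the real verify_nr_hugepages in hugepages.sh is longer and only partially visible in this excerpt):

    verify_nr_hugepages_sketch() {   # hypothetical condensation, not hugepages.sh itself
        local expected=$1 anon surp resv total free
        anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97, 0 in this run
        surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99, 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100
        total=$(get_meminfo HugePages_Total)
        free=$(get_meminfo HugePages_Free)
        # nothing is consuming the pool yet, so every requested page should
        # be free and none surplus or reserved
        (( total == expected && free == expected && surp == 0 && resv == 0 ))
    }
    verify_nr_hugepages_sketch 1536

The meminfo dumps above are consistent with that: HugePages_Total: 1536, HugePages_Free: 1536, and Hugetlb: 3145728 kB, which is 1536 pages times the 2048 kB Hugepagesize.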
00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.933 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107004356 kB' 'MemAvailable: 111593320 kB' 'Buffers: 2704 kB' 'Cached: 11388620 kB' 'SwapCached: 0 kB' 'Active: 7292116 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901108 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565076 kB' 'Mapped: 199756 kB' 'Shmem: 6339640 kB' 'KReclaimable: 578224 kB' 'Slab: 1358764 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780540 kB' 'KernelStack: 27120 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8422516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237308 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.934 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107005672 kB' 'MemAvailable: 111594636 kB' 'Buffers: 2704 kB' 'Cached: 11388636 kB' 'SwapCached: 0 kB' 'Active: 7292348 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901340 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565272 kB' 'Mapped: 199740 kB' 'Shmem: 6339656 kB' 'KReclaimable: 578224 kB' 'Slab: 1358856 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780632 kB' 'KernelStack: 27152 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8424372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237356 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.935 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.936 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
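For orientation amid the xtrace: the long run of "-- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "# continue" records is setup/common.sh's get_meminfo helper scanning /proc/meminfo linearly for HugePages_Rsvd, skipping every other key until it matches, echoing the value, and returning. A minimal sketch of the loop the common.sh@17-@33 markers imply (a reconstruction for readability, not the verbatim SPDK source; only behavior visible in this trace is assumed):

    # Sketch of get_meminfo as implied by the common.sh@17-@33 trace markers.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, prefer the per-node view when it exists (@23-@24);
        # with node empty this tests .../node/node/meminfo, exactly as the trace
        # shows, and falls back to /proc/meminfo.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Each non-matching key becomes one "continue" record in the xtrace.
            [[ $var == "$get" ]] || continue
            echo "$val"    # e.g. 0 for HugePages_Rsvd, 1536 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Under set -x each skipped key costs four trace records (the IFS assignment, the read, the test, and the continue), which is why a single lookup dominates pages of this log.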
00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:22.937 nr_hugepages=1536 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.937 resv_hugepages=0 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.937 surplus_hugepages=0 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.937 anon_hugepages=0 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107006308 kB' 'MemAvailable: 111595272 kB' 'Buffers: 2704 kB' 'Cached: 11388676 kB' 'SwapCached: 0 kB' 'Active: 7292548 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901540 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 565472 kB' 'Mapped: 199740 kB' 'Shmem: 6339696 kB' 'KReclaimable: 578224 kB' 'Slab: 1358848 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780624 kB' 'KernelStack: 27280 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8424764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237372 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.937 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
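The scan running here is get_meminfo HugePages_Total against the full /proc/meminfo snapshot printf'd a little earlier (HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB; 1536 x 2048 kB = 3145728 kB, matching the snapshot's Hugetlb line). hugepages.sh already echoed nr_hugepages=1536 with reserved, surplus, and anonymous counts of 0, and the @107/@110 tests confirm the kernel's total accounts for every requested page. A sketch of that check with this run's values (reconstructed from the @100-@110 markers, not verbatim source; the error branch is illustrative):

    # Consistency check implied by hugepages.sh@100-@110 (hedged reconstruction;
    # the literal values are the ones echoed in this run).
    nr_hugepages=1536
    resv=0    # resv_hugepages=0 above
    surp=0    # surplus_hugepages=0 above
    total=$(get_meminfo HugePages_Total)           # 1536 here
    (( total == nr_hugepages + surp + resv )) ||   # the @107/@110 test
        { echo "hugepage accounting mismatch" >&2; exit 1; }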
00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59087972 kB' 'MemUsed: 6571036 kB' 'SwapCached: 0 kB' 'Active: 2281136 kB' 'Inactive: 1033508 kB' 'Active(anon): 2067800 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1033508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2907612 kB' 'Mapped: 63888 kB' 'AnonPages: 410476 kB' 'Shmem: 1660768 kB' 'KernelStack: 15000 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198088 kB' 'Slab: 572348 kB' 'SReclaimable: 198088 kB' 'SUnreclaim: 374260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 
13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 
13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.938 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
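This stretch is the same linear scan, now over /sys/devices/system/node/node0/meminfo (selected at common.sh@23-@24 once node=0 was passed) looking for HugePages_Surp; node0's snapshot reports HugePages_Total: 512 and HugePages_Free: 512. Per-node meminfo lines carry a "Node 0 " prefix, which the extglob strip at common.sh@29 removes before scanning, and hugepage counts have no kB suffix. For a quick manual check outside the harness, a one-liner like this reads the same counter (illustrative only, not part of the test; the key name and path are taken from the trace):

    # Read one hugepage counter for a NUMA node; per-node lines look like
    # "Node 0 HugePages_Surp:     0", so the key is field 3 and the value field 4.
    node=0
    awk -v key=HugePages_Surp '$3 == key":" { print $4 }' \
        "/sys/devices/system/node/node$node/meminfo"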
00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
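Node0 answers 0 just below, and the test then repeats the whole dance for node1 (whose snapshot shows HugePages_Total: 1024, HugePages_Free: 1024). The @27-@33 markers earlier recorded the expected split, nodes_sys[0]=512 and nodes_sys[1]=1024 with no_nodes=2, summing to the 1536 total, and the @115-@117 loop folds reserved and surplus pages into each node's expected count before comparison. A sketch of that loop (reconstructed from the trace markers; the initialization of nodes_test is assumed to mirror this run's 512/1024 split, which the trace does not show directly):

    # Per-node verification implied by hugepages.sh@115-@117 (hedged sketch).
    nodes_test=(512 1024)   # assumed: expected pages per node in this run
    resv=0                  # from the earlier HugePages_Rsvd lookup
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                # @116
        surp=$(get_meminfo HugePages_Surp "$node")    # @117; 0 on both nodes here
        (( nodes_test[node] += surp ))
    done

With zero surplus on both nodes the expected split stays 512/1024, matching the HugePages_Total values in the two per-node snapshots.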
00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 47917652 kB' 'MemUsed: 12762200 kB' 'SwapCached: 0 kB' 'Active: 5010992 kB' 'Inactive: 3627168 kB' 'Active(anon): 4833320 kB' 'Inactive(anon): 0 kB' 'Active(file): 177672 kB' 'Inactive(file): 3627168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8483788 kB' 'Mapped: 135852 kB' 'AnonPages: 154552 kB' 'Shmem: 4678948 kB' 'KernelStack: 12264 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 380136 kB' 'Slab: 786468 kB' 'SReclaimable: 380136 kB' 'SUnreclaim: 406332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 
13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.939 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.940 13:50:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:22.940 node0=512 expecting 512 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:22.940 node1=1024 expecting 1024 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:22.940 00:04:22.940 real 0m4.040s 00:04:22.940 user 0m1.597s 00:04:22.940 sys 0m2.507s 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.940 13:50:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.940 ************************************ 00:04:22.940 END TEST custom_alloc 00:04:22.940 ************************************ 00:04:22.940 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:22.940 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:22.940 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.940 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.940 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.940 ************************************ 00:04:22.940 START TEST no_shrink_alloc 00:04:22.940 ************************************ 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.940 13:50:20 
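The custom_alloc verification that finishes above boils down to the loop at hugepages.sh@126-130: the distinct per-node page counts are collected as keys of an indexed array, echoed as "nodeN=X expecting Y", and finally compared comma-joined against the expected literal. A minimal runnable sketch, with the roles of nodes_test and nodes_sys inferred from the trace rather than copied from the script:

    #!/usr/bin/env bash
    # Per-node check as traced at hugepages.sh@126-130; nodes_test
    # (requested pages) vs. nodes_sys (kernel-reported pages) is inferred.
    nodes_test=([0]=512 [1]=1024)
    nodes_sys=([0]=512 [1]=1024)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        # Keying an indexed array by the count de-duplicates and
        # numerically sorts the counts for free.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Mirrors the final "[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]" in the trace:
    joined=$(IFS=,; echo "${!sorted_t[*]}")
    [[ $joined == 512,1024 ]] && echo "custom_alloc: PASS"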
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.940 13:50:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.211 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:27.211 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- 
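Two smaller steps are traced in the stretch above before the meminfo scans start: get_test_nr_hugepages_per_node (hugepages.sh@62-73) assigns the full 1024-page request to the single user-supplied node, and verify_nr_hugepages opens with the transparent-hugepage probe at @96, which pattern-matches the bracketed selection the kernel reports in sysfs. A condensed sketch of both, assuming the standard sysfs path for the THP policy (the path itself is not printed in the log):

    #!/usr/bin/env bash
    # get_test_nr_hugepages_per_node, as traced: user_nodes=('0'), so all
    # 1024 pages are pinned to nodes_test[0].
    user_nodes=(0)
    _nr_hugepages=1024
    nodes_test=()
    for _no_nodes in "${user_nodes[@]}"; do
        nodes_test[_no_nodes]=$_nr_hugepages
    done
    echo "nodes_test[0]=${nodes_test[0]}"    # -> 1024

    # THP probe at hugepages.sh@96: sysfs brackets the active policy,
    # e.g. "always [madvise] never". Anything but [never] means anonymous
    # hugepages may appear, so AnonHugePages has to be sampled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    [[ $thp != *"[never]"* ]] && echo "THP active: $thp"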
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108069316 kB' 'MemAvailable: 112658280 kB' 'Buffers: 2704 kB' 'Cached: 11388808 kB' 'SwapCached: 0 kB' 'Active: 7292672 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901664 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564512 kB' 'Mapped: 199888 kB' 'Shmem: 6339828 kB' 'KReclaimable: 578224 kB' 'Slab: 1358968 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780744 kB' 'KernelStack: 27248 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8426032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 
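Everything from here through the end of the excerpt is the field-by-field scan inside get_meminfo (setup/common.sh@16-33): the meminfo dump just printed is re-read one "key: value" pair at a time, and every non-matching key hits the "continue" at @32, which is what produces the long runs of near-identical lines. A sketch of the routine reconstructed from the trace; the exact control flow around the per-node branch is inferred:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node lookups read the NUMA-local copy when one is requested
        # and present (node is empty in the calls traced here).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip it so both
        # flavors parse the same way (common.sh@29 in the trace).
        mem=("${mem[@]#Node +([0-9]) }")
        local IFS=': '
        while read -r var val _; do
            # Skip every field until $var matches the requested key --
            # the "continue" at common.sh@32, repeated once per field.
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo AnonHugePages    # -> 0 on the box captured in this log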
13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.211 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108069552 kB' 'MemAvailable: 112658516 kB' 'Buffers: 2704 kB' 'Cached: 11388812 kB' 'SwapCached: 0 kB' 'Active: 7292968 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901960 
kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564888 kB' 'Mapped: 199888 kB' 'Shmem: 6339832 kB' 'KReclaimable: 578224 kB' 'Slab: 1358960 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780736 kB' 'KernelStack: 27296 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8426048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237420 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.212 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108069992 kB' 'MemAvailable: 112658956 kB' 'Buffers: 2704 kB' 'Cached: 11388828 kB' 'SwapCached: 0 kB' 'Active: 7291416 kB' 'Inactive: 4660676 kB' 'Active(anon): 6900408 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563768 kB' 'Mapped: 199772 kB' 'Shmem: 6339848 kB' 
'KReclaimable: 578224 kB' 'Slab: 1358960 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780736 kB' 'KernelStack: 27248 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8424588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237452 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.213 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
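
Worth pausing on the snapshot itself: the dump above reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0 with Hugepagesize: 2048 kB, and 1024 pages * 2048 kB/page = 2097152 kB = 2 GiB, exactly the Hugetlb figure in the same dump. The hugepages.sh@107/@109 checks further below do the corresponding bookkeeping; in sketch form, reusing the get_meminfo sketch above (variable names follow the trace, the surrounding script is assumed):

    nr_hugepages=1024                      # the pool size this test configured
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> pool intact
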
00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:27.214 nr_hugepages=1024 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.214 resv_hugepages=0 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.214 surplus_hugepages=0 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.214 anon_hugepages=0 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:27.214 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108070056 kB' 'MemAvailable: 112659020 kB' 'Buffers: 2704 kB' 'Cached: 11388828 kB' 'SwapCached: 0 kB' 'Active: 7291984 kB' 'Inactive: 4660676 kB' 'Active(anon): 6900976 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564396 kB' 'Mapped: 199772 kB' 'Shmem: 6339848 kB' 'KReclaimable: 578224 kB' 'Slab: 1358960 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 780736 kB' 'KernelStack: 27472 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8426096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237436 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.215 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58060628 kB' 'MemUsed: 7598380 kB' 'SwapCached: 0 kB' 'Active: 2281252 kB' 'Inactive: 1033508 kB' 'Active(anon): 2067916 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1033508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2907744 kB' 'Mapped: 63896 kB' 'AnonPages: 410148 kB' 'Shmem: 1660900 kB' 'KernelStack: 14984 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198088 kB' 'Slab: 572440 kB' 'SReclaimable: 198088 kB' 'SUnreclaim: 374352 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 
13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.216 13:50:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[... xtrace elided: the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" test, "continue", "IFS=': '", "read -r var val _" cycle repeats for Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:27.217 node0=1024 expecting 1024
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:27.217 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
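The check-and-continue cascade traced above is setup/common.sh's get_meminfo walking /proc/meminfo one "Field: value" line at a time until the requested field matches, then echoing its value. A minimal, self-contained re-creation of that pattern follows; it is reconstructed from the trace, not copied from SPDK's scripts/setup/common.sh (whose @17-@33 line tags appear in the log), and the real helper feeds mapfile from a printf process substitution.

  #!/usr/bin/env bash
  # Sketch of the pattern traced above. Usage: get_meminfo <Field> [numa-node]
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo mem
      # Prefer the per-node view when a node is given and present
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # strip per-node "Node <N> " prefixes
      for line in "${mem[@]}"; do
          # "HugePages_Surp:        0" -> var=HugePages_Surp val=0
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo HugePages_Total # prints 1024 on the box traced above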
00:04:30.515 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:30.515 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:30.515 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:30.515 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
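The [[ -e /sys/devices/system/node/node/meminfo ]] test just above has an empty node number because verify_nr_hugepages calls get_meminfo without a node argument, so the helper falls back to /proc/meminfo. When a node is given, the per-node sysfs file prefixes every line with "Node <N> ", which is what the @29 strip is for. A tiny illustration; the file path and line shape are standard sysfs, the snippet itself is only illustrative:

  #!/usr/bin/env bash
  # Per-node meminfo lines carry a "Node <N> " prefix that /proc/meminfo
  # lines do not, hence the strip at common.sh@29.
  shopt -s extglob
  line='Node 0 HugePages_Total:  1024' # shape of /sys/devices/system/node/node0/meminfo
  echo "${line#Node +([0-9]) }"        # -> "HugePages_Total:  1024"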
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108086444 kB' 'MemAvailable: 112675408 kB' 'Buffers: 2704 kB' 'Cached: 11388960 kB' 'SwapCached: 0 kB' 'Active: 7292804 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901796 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565032 kB' 'Mapped: 199856 kB' 'Shmem: 6339980 kB' 'KReclaimable: 578224 kB' 'Slab: 1359548 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781324 kB' 'KernelStack: 27216 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8423872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237404 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.516 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same test/continue/IFS=': '/read -r var val _ cycle repeats for every field ahead of AnonHugePages, from MemFree through HardwareCorrupted ...]
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
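At this point the helper has established anon=0 and goes back for HugePages_Surp and then HugePages_Rsvd; together with the per-node counts these feed the check that printed "node0=1024 expecting 1024" earlier (hugepages.sh@126-@130). A hedged sketch of that per-node comparison; the array names come from the trace, the surrounding control flow is reconstructed:

  #!/usr/bin/env bash
  # Sketch of the per-node verification implied by hugepages.sh@126-@130.
  declare -A nodes_test=([0]=1024) # expected pages per node (test side)
  declare -A nodes_sys=([0]=1024)  # pages actually allocated (system side)
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
      [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
  done

Reading straight off the snapshot above, the pool being validated is 1024 pages x 2048 kB = 2097152 kB (2 GiB), which matches the reported Hugetlb field.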
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108091076 kB' 'MemAvailable: 112680040 kB' 'Buffers: 2704 kB' 'Cached: 11388964 kB' 'SwapCached: 0 kB' 'Active: 7292048 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901040 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564312 kB' 'Mapped: 199848 kB' 'Shmem: 6339984 kB' 'KReclaimable: 578224 kB' 'Slab: 1359676 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781452 kB' 'KernelStack: 27216 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8423888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237356 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.517 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same test/continue/IFS=': '/read -r var val _ cycle repeats for every field ahead of HugePages_Surp, from MemFree through HugePages_Rsvd ...]
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
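A note on the odd-looking right-hand sides like \H\u\g\e\P\a\g\e\s\_\S\u\r\p: inside [[ ... == ... ]] an unquoted right-hand side is a glob pattern, so when the comparison target is quoted, bash's xtrace re-prints it with every character backslash-escaped to show that the match is literal. Reproducible in isolation:

  #!/usr/bin/env bash
  # Why the log shows \H\u\g\e\P\a\g\e\s\_\S\u\r\p for a plain string compare.
  set -x
  get=HugePages_Surp var=HugePages_Surp
  [[ $var == "$get" ]] && echo matched
  # xtrace prints: + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]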
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108091308 kB' 'MemAvailable: 112680272 kB' 'Buffers: 2704 kB' 'Cached: 11388968 kB' 'SwapCached: 0 kB' 'Active: 7292336 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901328 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564604 kB' 'Mapped: 199848 kB' 'Shmem: 6339988 kB' 'KReclaimable: 578224 kB' 'Slab: 1359660 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781436 kB' 'KernelStack: 27200 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8423912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237356 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.519 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the same test/continue/IFS=': '/read -r var val _ cycle repeats for every field ahead of HugePages_Rsvd, from MemFree through KernelStack ...]
00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.520 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.521 nr_hugepages=1024 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.521 resv_hugepages=0 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.521 surplus_hugepages=0 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.521 anon_hugepages=0 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
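For reference, the get_meminfo helper traced above (and called again just below for HugePages_Total) reduces to a key lookup over /proc/meminfo, or over a per-node meminfo file when a NUMA node is passed. A minimal sketch of that pattern, assuming bash 4+ for mapfile and extglob; only the function name, the "Node N" prefix strip, and the meminfo paths come from the trace, the loop structure is illustrative:

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern below

# get_meminfo KEY [NODE] -> print the value of KEY and return 0
get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem
    # per-node files live in sysfs and prefix each row with "Node N "
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Rsvd      # -> 0 in the run above
get_meminfo HugePages_Surp 0    # -> 0 for NUMA node 0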
00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.521 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.783 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.784 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.784 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.784 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 108091372 kB' 'MemAvailable: 112680336 kB' 'Buffers: 2704 kB' 'Cached: 11389004 kB' 'SwapCached: 0 kB' 'Active: 7292088 kB' 'Inactive: 4660676 kB' 'Active(anon): 6901080 kB' 'Inactive(anon): 0 kB' 'Active(file): 391008 kB' 'Inactive(file): 4660676 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564276 kB' 'Mapped: 199848 kB' 'Shmem: 6340024 kB' 'KReclaimable: 578224 kB' 'Slab: 1359660 kB' 'SReclaimable: 578224 kB' 'SUnreclaim: 781436 kB' 'KernelStack: 27200 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8423932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237372 kB' 'VmallocChunk: 0 kB' 'Percpu: 163584 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19326976 kB' 'DirectMap1G: 113246208 kB'
[log condensed: the same per-key walk repeats over the dump above until HugePages_Total matches]
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
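The get_nodes step just traced fills nodes_sys[] with each NUMA node's configured hugepage count (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2 on this box). Roughly, assuming the default 2048 kB page size visible in the dumps, that amounts to the following sketch:

#!/usr/bin/env bash
# Sketch: enumerate NUMA nodes and record per-node 2 MiB hugepage counts.
declare -a nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node ]] || continue   # skip if the glob matched nothing
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"
for n in "${!nodes_sys[@]}"; do
    echo "node$n: ${nodes_sys[n]} hugepages"
done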
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.785 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58063460 kB' 'MemUsed: 7595548 kB' 'SwapCached: 0 kB' 'Active: 2279948 kB' 'Inactive: 1033508 kB' 'Active(anon): 2066612 kB' 'Inactive(anon): 0 kB' 'Active(file): 213336 kB' 'Inactive(file): 1033508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2907852 kB' 'Mapped: 63888 kB' 'AnonPages: 408764 kB' 'Shmem: 1661008 kB' 'KernelStack: 15000 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198088 kB' 'Slab: 572908 kB' 'SReclaimable: 198088 kB' 'SUnreclaim: 374820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[log condensed: the same per-key walk repeats over the node0 dump above until HugePages_Surp matches]
00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.787 node0=1024 expecting 1024 00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.787 00:04:30.787 real 0m7.878s 00:04:30.787 user 0m3.070s 00:04:30.787 sys 0m4.935s 00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.787 13:50:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:30.787 ************************************ 00:04:30.787 END TEST no_shrink_alloc 00:04:30.787 ************************************ 00:04:30.787 13:50:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:30.787 13:50:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:30.787 00:04:30.787 real 0m27.834s 00:04:30.787 user 0m10.670s 00:04:30.787 sys 0m17.399s 00:04:30.787 13:50:28 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.787 13:50:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.787 ************************************ 00:04:30.787 END TEST hugepages 00:04:30.787 ************************************ 00:04:30.787 13:50:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:30.787 13:50:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:30.787 13:50:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.787 13:50:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.787 13:50:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.787 ************************************ 00:04:30.787 START TEST driver 00:04:30.787 ************************************ 00:04:30.787 13:50:28 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:30.787 * Looking for test storage... 
00:04:31.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:31.047 13:50:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:31.047 13:50:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.047 13:50:28 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.331 13:50:33 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:36.331 13:50:33 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.331 13:50:33 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.331 13:50:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.331 ************************************ 00:04:36.331 START TEST guess_driver 00:04:36.331 ************************************ 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:36.331 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:36.331 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:36.331 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:36.331 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:36.331 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:36.331 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:36.331 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:36.331 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:36.331 13:50:33 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:36.332 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:36.332 Looking for driver=vfio-pci 00:04:36.332 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.332 13:50:33 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:36.332 13:50:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.332 13:50:33 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.534 13:50:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.534 13:50:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.534 13:50:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.534 [the identical setup/driver.sh@58/@61/@57 trace triple repeats verbatim for each remaining '-> vfio-pci' marker line of the setup.sh config output] 13:50:37 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:40.534 13:50:37 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:40.534 13:50:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.534 13:50:37 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.843 00:04:45.843 real 0m9.041s 00:04:45.843 user 0m2.973s 00:04:45.843 sys 0m5.323s 13:50:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.843 13:50:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.844 ************************************ 00:04:45.844 END TEST guess_driver 00:04:45.844 ************************************ 00:04:45.844 13:50:43 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:45.844 00:04:45.844 real 0m14.242s 00:04:45.844 user 0m4.425s 00:04:45.844 sys 0m8.307s 13:50:43 
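guess_driver settled on vfio-pci because driver.sh found populated IOMMU groups (the trace saw 370) and modprobe resolved vfio_pci to real .ko modules. A simplified sketch of that decision; the unsafe-noiommu check from the trace is omitted here, and nullglob handling is skipped:

    # Choose vfio-pci when the IOMMU is usable and the module chain resolves.
    pick_vfio() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        (( ${#iommu_groups[@]} > 0 )) || return 1           # trace: (( 370 > 0 ))
        [[ $(modprobe --show-depends vfio_pci) == *.ko* ]] || return 1
        echo vfio-pci                                       # trace: driver=vfio-pci
    }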
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.844 13:50:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.844 ************************************ 00:04:45.844 END TEST driver 00:04:45.844 ************************************ 00:04:45.844 13:50:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:45.844 13:50:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:45.844 13:50:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.844 13:50:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.844 13:50:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.844 ************************************ 00:04:45.844 START TEST devices 00:04:45.844 ************************************ 00:04:45.844 13:50:43 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:45.844 * Looking for test storage... 00:04:45.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:45.844 13:50:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.844 13:50:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:45.844 13:50:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.844 13:50:43 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:50.049 
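Before probing nvme0n1, the devices suite filtered out zoned namespaces and fixed a minimum usable disk size. Roughly, with names borrowed from the trace rather than copied from the script (the real glob also excludes controller character devices):

    # A device is 'zoned' when its queue/zoned attribute exists and is not 'none'.
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(</sys/block/$device/queue/zoned) != none ]]
    }

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in devices.sh@198
    for nvme in /sys/block/nvme*; do
        is_block_zoned "${nvme##*/}" || echo "usable: ${nvme##*/}"   # nvme0n1 reported 'none'
    done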
13:50:47 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:50.049 No valid GPT data, bailing 00:04:50.049 13:50:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:50.049 13:50:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:50.049 13:50:47 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:50.049 13:50:47 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.049 13:50:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.049 ************************************ 00:04:50.049 START TEST nvme_mount 00:04:50.049 ************************************ 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.049 13:50:47 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:50.310 Creating new GPT entries in memory. 00:04:50.310 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.310 other utilities. 00:04:50.310 13:50:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.310 13:50:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.310 13:50:48 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.310 13:50:48 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.310 13:50:48 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:51.694 Creating new GPT entries in memory. 00:04:51.694 The operation has completed successfully. 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1120558 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.694 13:50:49 
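The partition-and-mount sequence just traced (zap the label, carve one 1 GiB partition, ext4 it, mount it) condenses to a few commands. A sketch with the mount point shortened for illustration:

    disk=/dev/nvme0n1
    mnt=/mnt/nvme_mount   # stand-in for the job's spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                            # destroy GPT/MBR data structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # (2099199-2048+1)*512 = 1 GiB
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"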
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.694 13:50:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:54.987 13:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.987 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.987 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.987 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.988 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.988 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.988 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.250 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:55.250 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:55.250 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.251 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.251 13:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.457 13:50:56 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.457 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.458 13:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.458 13:50:57 
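Each of these verify passes re-runs setup.sh config with PCI_ALLOWED pinned to the test controller and scans its per-device status lines for the expected active-device marker. The shape of the loop, simplified from the devices.sh@48-66 trace:

    dev=0000:65:00.0 mounts=nvme0n1:nvme0n1 found=0
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue     # skip the 0000:80:01.x / 0000:00:01.x entries
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$dev ./scripts/setup.sh config)
    (( found == 1 ))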
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.458 13:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.825 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:02.826 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.086 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.086 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.086 13:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.086 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.086 00:05:03.086 real 0m13.546s 00:05:03.086 user 0m4.157s 00:05:03.086 sys 0m7.226s 00:05:03.086 13:51:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.086 13:51:00 
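cleanup_nvme, reconstructed from the devices.sh@20-28 trace above (unmount if mounted, then wipe the partition and the disk):

    cleanup_nvme() {
        if mountpoint -q "$nvme_mount"; then
            umount "$nvme_mount"
        fi
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
    }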
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:03.086 ************************************ 00:05:03.086 END TEST nvme_mount 00:05:03.086 ************************************ 00:05:03.086 13:51:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:03.086 13:51:00 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.086 13:51:00 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.086 13:51:00 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.086 13:51:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:03.086 ************************************ 00:05:03.086 START TEST dm_mount 00:05:03.086 ************************************ 00:05:03.086 13:51:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.087 13:51:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:04.028 Creating new GPT entries in memory. 00:05:04.028 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:04.028 other utilities. 00:05:04.028 13:51:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:04.028 13:51:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.028 13:51:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:04.028 13:51:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.028 13:51:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:04.970 Creating new GPT entries in memory. 00:05:04.970 The operation has completed successfully. 00:05:04.970 13:51:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.970 13:51:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.970 13:51:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.970 13:51:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.970 13:51:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:06.354 The operation has completed successfully. 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1126157 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.354 13:51:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.654 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:09.915 13:51:07 
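For the dm_mount pass, devices.sh stacked the two fresh partitions into one device-mapper target, waited for /dev/mapper/nvme_dm_test to appear, and confirmed dm-0 holds both partitions. A sketch of that construction; the linear concatenation table below is an assumption, since the log never prints the table fed to dmsetup:

    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
    sz1=$(blockdev --getsz "$p1")                # sizes in 512-byte sectors
    sz2=$(blockdev --getsz "$p2")
    printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
        "$sz1" "$p1" "$sz1" "$sz2" "$p2" | dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # resolved to /dev/dm-0 in the log
    [[ -e /sys/class/block/nvme0n1p1/holders/${dm##*/} ]]   # partitions hold dm-0
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test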
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.915 13:51:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:14.120 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:14.120 00:05:14.120 real 0m10.888s 00:05:14.120 user 0m2.952s 00:05:14.120 sys 0m5.021s 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.120 13:51:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:14.120 ************************************ 00:05:14.120 END TEST dm_mount 00:05:14.120 ************************************ 00:05:14.120 13:51:11 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:14.120 13:51:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:14.120 13:51:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:14.120 13:51:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.120 13:51:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.120 13:51:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:14.120 13:51:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.120 13:51:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.120 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:14.120 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:14.120 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.120 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.120 13:51:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:14.120 13:51:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.120 13:51:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:14.120 13:51:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.120 13:51:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:14.120 13:51:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.120 13:51:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:14.120 00:05:14.120 real 0m29.108s 00:05:14.120 user 0m8.825s 00:05:14.120 sys 0m15.090s 00:05:14.381 13:51:12 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.381 13:51:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:14.381 ************************************ 00:05:14.381 END TEST devices 00:05:14.381 ************************************ 00:05:14.381 13:51:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:14.381 00:05:14.381 real 1m38.366s 00:05:14.381 user 0m32.964s 00:05:14.381 sys 0m56.646s 00:05:14.381 13:51:12 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.381 13:51:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.381 ************************************ 00:05:14.381 END TEST setup.sh 00:05:14.381 ************************************ 00:05:14.381 13:51:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.381 13:51:12 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:18.587 Hugepages 00:05:18.587 node hugesize free / total 00:05:18.587 node0 1048576kB 0 / 0 00:05:18.587 node0 2048kB 2048 / 2048 00:05:18.587 node1 1048576kB 0 / 0 00:05:18.587 node1 2048kB 0 / 0 00:05:18.587 00:05:18.587 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.587 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:18.587 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:18.587 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:18.587 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:18.587 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:18.587 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:18.587 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:18.587 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:18.587 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:18.587 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:18.587 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:18.587 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:18.587 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:18.587 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:18.587 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:18.587 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:18.587 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:18.587 13:51:16 -- spdk/autotest.sh@130 -- # uname -s 00:05:18.587 13:51:16 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:18.587 13:51:16 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:18.587 13:51:16 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.909 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:21.909 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:22.169 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:24.080 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:24.080 13:51:21 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:25.022 13:51:22 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:25.022 13:51:22 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:25.022 13:51:22 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.022 13:51:22 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:25.022 13:51:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:25.022 13:51:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:25.022 13:51:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.022 13:51:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.022 13:51:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:25.022 13:51:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:25.022 13:51:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:25.022 13:51:22 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.224 Waiting for block devices as requested 00:05:29.224 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:29.224 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:29.484 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:29.484 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:29.744 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:29.744 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:29.744 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:29.744 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:30.006 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:30.006 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:30.006 13:51:27 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:30.006 13:51:27 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:30.006 13:51:27 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:30.006 13:51:27 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:30.006 13:51:27 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:30.006 13:51:27 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:30.006 13:51:27 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:30.006 13:51:27 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:30.006 13:51:27 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:30.006 13:51:27 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:30.006 13:51:27 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:30.006 13:51:27 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:30.006 13:51:27 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:30.006 13:51:28 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:30.006 13:51:28 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:30.006 13:51:28 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:30.006 13:51:28 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:30.006 13:51:28 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:30.006 13:51:28 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:30.006 13:51:28 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:30.006 13:51:28 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:30.006 13:51:28 -- common/autotest_common.sh@1557 -- # continue 00:05:30.006 13:51:28 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:30.006 13:51:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.006 13:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.006 13:51:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:30.006 13:51:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.006 13:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.006 13:51:28 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:34.291 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
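The nvme_namespace_revert gate traced above boils down to two nvme-cli reads: bit 3 of OACS advertises namespace management, and a zero unvmcap means there is no unallocated capacity left to reclaim. A minimal standalone sketch of that gate, assuming nvme-cli is installed and the controller has already been resolved to /dev/nvme0 as in the readlink step above:

    # sketch: reproduce the oacs/unvmcap gate from nvme_namespace_revert
    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. " 0x5f" as logged above
    if (( (oacs & 0x8) == 0 )); then
        echo "no namespace management on $ctrlr; skipping revert"
    elif (( $(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2) == 0 )); then
        echo "unvmcap is 0 on $ctrlr; nothing to reclaim"     # the 'continue' path above
    fi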
00:05:34.291 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.291 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:34.291 13:51:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:34.291 13:51:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.291 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.291 13:51:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:34.291 13:51:31 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:34.291 13:51:31 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.291 13:51:31 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:34.291 13:51:31 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:34.291 13:51:31 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:34.291 13:51:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:34.291 13:51:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:34.291 13:51:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.291 13:51:31 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.291 13:51:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:34.291 13:51:32 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:34.291 13:51:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:34.291 13:51:32 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:34.291 13:51:32 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:34.291 13:51:32 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:34.291 13:51:32 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:34.291 13:51:32 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:34.291 13:51:32 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:34.291 13:51:32 -- common/autotest_common.sh@1593 -- # return 0 00:05:34.291 13:51:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:34.292 13:51:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:34.292 13:51:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.292 13:51:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.292 13:51:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:34.292 13:51:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.292 13:51:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.292 13:51:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:34.292 13:51:32 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:34.292 13:51:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.292 13:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.292 13:51:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.292 ************************************ 00:05:34.292 START TEST env 00:05:34.292 ************************************ 00:05:34.292 13:51:32 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:34.292 * Looking for test storage... 
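opal_revert_cleanup above only acts on controllers whose PCI device ID matches 0x0a54; this box's 144d:a80a Samsung controller falls through the comparison, so mapfile collects an empty list and the function returns 0. A hedged sketch of that filter, reusing the tree's gen_nvme.sh enumeration exactly as the trace does:

    # sketch: which enumerated NVMe BDFs would qualify for the Opal revert
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        # sysfs exposes the 16-bit PCI device ID as 0x-prefixed hex
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done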
00:05:34.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:34.292 13:51:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.292 13:51:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.292 13:51:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.292 13:51:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.292 ************************************ 00:05:34.292 START TEST env_memory 00:05:34.292 ************************************ 00:05:34.292 13:51:32 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.292 00:05:34.292 00:05:34.292 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.292 http://cunit.sourceforge.net/ 00:05:34.292 00:05:34.292 00:05:34.292 Suite: memory 00:05:34.292 Test: alloc and free memory map ...[2024-07-15 13:51:32.299825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.292 passed 00:05:34.292 Test: mem map translation ...[2024-07-15 13:51:32.327446] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.292 [2024-07-15 13:51:32.327481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.292 [2024-07-15 13:51:32.327529] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.292 [2024-07-15 13:51:32.327536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.292 passed 00:05:34.292 Test: mem map registration ...[2024-07-15 13:51:32.382847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:34.292 [2024-07-15 13:51:32.382867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:34.292 passed 00:05:34.553 Test: mem map adjacent registrations ...passed 00:05:34.553 00:05:34.553 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.553 suites 1 1 n/a 0 0 00:05:34.553 tests 4 4 4 0 0 00:05:34.553 asserts 152 152 152 0 n/a 00:05:34.553 00:05:34.553 Elapsed time = 0.198 seconds 00:05:34.553 00:05:34.553 real 0m0.212s 00:05:34.553 user 0m0.200s 00:05:34.553 sys 0m0.011s 00:05:34.553 13:51:32 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.553 13:51:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:34.553 ************************************ 00:05:34.553 END TEST env_memory 00:05:34.553 ************************************ 00:05:34.553 13:51:32 env -- common/autotest_common.sh@1142 -- # return 0 00:05:34.553 13:51:32 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.553 13:51:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:34.553 13:51:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.553 13:51:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.553 ************************************ 00:05:34.553 START TEST env_vtophys 00:05:34.553 ************************************ 00:05:34.553 13:51:32 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.553 EAL: lib.eal log level changed from notice to debug 00:05:34.553 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.553 EAL: Detected lcore 1 as core 1 on socket 0 00:05:34.553 EAL: Detected lcore 2 as core 2 on socket 0 00:05:34.553 EAL: Detected lcore 3 as core 3 on socket 0 00:05:34.553 EAL: Detected lcore 4 as core 4 on socket 0 00:05:34.553 EAL: Detected lcore 5 as core 5 on socket 0 00:05:34.553 EAL: Detected lcore 6 as core 6 on socket 0 00:05:34.553 EAL: Detected lcore 7 as core 7 on socket 0 00:05:34.553 EAL: Detected lcore 8 as core 8 on socket 0 00:05:34.553 EAL: Detected lcore 9 as core 9 on socket 0 00:05:34.553 EAL: Detected lcore 10 as core 10 on socket 0 00:05:34.553 EAL: Detected lcore 11 as core 11 on socket 0 00:05:34.553 EAL: Detected lcore 12 as core 12 on socket 0 00:05:34.553 EAL: Detected lcore 13 as core 13 on socket 0 00:05:34.553 EAL: Detected lcore 14 as core 14 on socket 0 00:05:34.553 EAL: Detected lcore 15 as core 15 on socket 0 00:05:34.553 EAL: Detected lcore 16 as core 16 on socket 0 00:05:34.553 EAL: Detected lcore 17 as core 17 on socket 0 00:05:34.553 EAL: Detected lcore 18 as core 18 on socket 0 00:05:34.553 EAL: Detected lcore 19 as core 19 on socket 0 00:05:34.553 EAL: Detected lcore 20 as core 20 on socket 0 00:05:34.553 EAL: Detected lcore 21 as core 21 on socket 0 00:05:34.553 EAL: Detected lcore 22 as core 22 on socket 0 00:05:34.553 EAL: Detected lcore 23 as core 23 on socket 0 00:05:34.553 EAL: Detected lcore 24 as core 24 on socket 0 00:05:34.553 EAL: Detected lcore 25 as core 25 on socket 0 00:05:34.553 EAL: Detected lcore 26 as core 26 on socket 0 00:05:34.553 EAL: Detected lcore 27 as core 27 on socket 0 00:05:34.553 EAL: Detected lcore 28 as core 28 on socket 0 00:05:34.553 EAL: Detected lcore 29 as core 29 on socket 0 00:05:34.553 EAL: Detected lcore 30 as core 30 on socket 0 00:05:34.553 EAL: Detected lcore 31 as core 31 on socket 0 00:05:34.553 EAL: Detected lcore 32 as core 32 on socket 0 00:05:34.553 EAL: Detected lcore 33 as core 33 on socket 0 00:05:34.553 EAL: Detected lcore 34 as core 34 on socket 0 00:05:34.553 EAL: Detected lcore 35 as core 35 on socket 0 00:05:34.553 EAL: Detected lcore 36 as core 0 on socket 1 00:05:34.554 EAL: Detected lcore 37 as core 1 on socket 1 00:05:34.554 EAL: Detected lcore 38 as core 2 on socket 1 00:05:34.554 EAL: Detected lcore 39 as core 3 on socket 1 00:05:34.554 EAL: Detected lcore 40 as core 4 on socket 1 00:05:34.554 EAL: Detected lcore 41 as core 5 on socket 1 00:05:34.554 EAL: Detected lcore 42 as core 6 on socket 1 00:05:34.554 EAL: Detected lcore 43 as core 7 on socket 1 00:05:34.554 EAL: Detected lcore 44 as core 8 on socket 1 00:05:34.554 EAL: Detected lcore 45 as core 9 on socket 1 00:05:34.554 EAL: Detected lcore 46 as core 10 on socket 1 00:05:34.554 EAL: Detected lcore 47 as core 11 on socket 1 00:05:34.554 EAL: Detected lcore 48 as core 12 on socket 1 00:05:34.554 EAL: Detected lcore 49 as core 13 on socket 1 00:05:34.554 EAL: Detected lcore 50 as core 14 on socket 1 00:05:34.554 EAL: Detected lcore 51 as core 15 on socket 1 00:05:34.554 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:34.554 EAL: Detected lcore 53 as core 17 on socket 1 00:05:34.554 EAL: Detected lcore 54 as core 18 on socket 1 00:05:34.554 EAL: Detected lcore 55 as core 19 on socket 1 00:05:34.554 EAL: Detected lcore 56 as core 20 on socket 1 00:05:34.554 EAL: Detected lcore 57 as core 21 on socket 1 00:05:34.554 EAL: Detected lcore 58 as core 22 on socket 1 00:05:34.554 EAL: Detected lcore 59 as core 23 on socket 1 00:05:34.554 EAL: Detected lcore 60 as core 24 on socket 1 00:05:34.554 EAL: Detected lcore 61 as core 25 on socket 1 00:05:34.554 EAL: Detected lcore 62 as core 26 on socket 1 00:05:34.554 EAL: Detected lcore 63 as core 27 on socket 1 00:05:34.554 EAL: Detected lcore 64 as core 28 on socket 1 00:05:34.554 EAL: Detected lcore 65 as core 29 on socket 1 00:05:34.554 EAL: Detected lcore 66 as core 30 on socket 1 00:05:34.554 EAL: Detected lcore 67 as core 31 on socket 1 00:05:34.554 EAL: Detected lcore 68 as core 32 on socket 1 00:05:34.554 EAL: Detected lcore 69 as core 33 on socket 1 00:05:34.554 EAL: Detected lcore 70 as core 34 on socket 1 00:05:34.554 EAL: Detected lcore 71 as core 35 on socket 1 00:05:34.554 EAL: Detected lcore 72 as core 0 on socket 0 00:05:34.554 EAL: Detected lcore 73 as core 1 on socket 0 00:05:34.554 EAL: Detected lcore 74 as core 2 on socket 0 00:05:34.554 EAL: Detected lcore 75 as core 3 on socket 0 00:05:34.554 EAL: Detected lcore 76 as core 4 on socket 0 00:05:34.554 EAL: Detected lcore 77 as core 5 on socket 0 00:05:34.554 EAL: Detected lcore 78 as core 6 on socket 0 00:05:34.554 EAL: Detected lcore 79 as core 7 on socket 0 00:05:34.554 EAL: Detected lcore 80 as core 8 on socket 0 00:05:34.554 EAL: Detected lcore 81 as core 9 on socket 0 00:05:34.554 EAL: Detected lcore 82 as core 10 on socket 0 00:05:34.554 EAL: Detected lcore 83 as core 11 on socket 0 00:05:34.554 EAL: Detected lcore 84 as core 12 on socket 0 00:05:34.554 EAL: Detected lcore 85 as core 13 on socket 0 00:05:34.554 EAL: Detected lcore 86 as core 14 on socket 0 00:05:34.554 EAL: Detected lcore 87 as core 15 on socket 0 00:05:34.554 EAL: Detected lcore 88 as core 16 on socket 0 00:05:34.554 EAL: Detected lcore 89 as core 17 on socket 0 00:05:34.554 EAL: Detected lcore 90 as core 18 on socket 0 00:05:34.554 EAL: Detected lcore 91 as core 19 on socket 0 00:05:34.554 EAL: Detected lcore 92 as core 20 on socket 0 00:05:34.554 EAL: Detected lcore 93 as core 21 on socket 0 00:05:34.554 EAL: Detected lcore 94 as core 22 on socket 0 00:05:34.554 EAL: Detected lcore 95 as core 23 on socket 0 00:05:34.554 EAL: Detected lcore 96 as core 24 on socket 0 00:05:34.554 EAL: Detected lcore 97 as core 25 on socket 0 00:05:34.554 EAL: Detected lcore 98 as core 26 on socket 0 00:05:34.554 EAL: Detected lcore 99 as core 27 on socket 0 00:05:34.554 EAL: Detected lcore 100 as core 28 on socket 0 00:05:34.554 EAL: Detected lcore 101 as core 29 on socket 0 00:05:34.554 EAL: Detected lcore 102 as core 30 on socket 0 00:05:34.554 EAL: Detected lcore 103 as core 31 on socket 0 00:05:34.554 EAL: Detected lcore 104 as core 32 on socket 0 00:05:34.554 EAL: Detected lcore 105 as core 33 on socket 0 00:05:34.554 EAL: Detected lcore 106 as core 34 on socket 0 00:05:34.554 EAL: Detected lcore 107 as core 35 on socket 0 00:05:34.554 EAL: Detected lcore 108 as core 0 on socket 1 00:05:34.554 EAL: Detected lcore 109 as core 1 on socket 1 00:05:34.554 EAL: Detected lcore 110 as core 2 on socket 1 00:05:34.554 EAL: Detected lcore 111 as core 3 on socket 1 00:05:34.554 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:34.554 EAL: Detected lcore 113 as core 5 on socket 1 00:05:34.554 EAL: Detected lcore 114 as core 6 on socket 1 00:05:34.554 EAL: Detected lcore 115 as core 7 on socket 1 00:05:34.554 EAL: Detected lcore 116 as core 8 on socket 1 00:05:34.554 EAL: Detected lcore 117 as core 9 on socket 1 00:05:34.554 EAL: Detected lcore 118 as core 10 on socket 1 00:05:34.554 EAL: Detected lcore 119 as core 11 on socket 1 00:05:34.554 EAL: Detected lcore 120 as core 12 on socket 1 00:05:34.554 EAL: Detected lcore 121 as core 13 on socket 1 00:05:34.554 EAL: Detected lcore 122 as core 14 on socket 1 00:05:34.554 EAL: Detected lcore 123 as core 15 on socket 1 00:05:34.554 EAL: Detected lcore 124 as core 16 on socket 1 00:05:34.554 EAL: Detected lcore 125 as core 17 on socket 1 00:05:34.554 EAL: Detected lcore 126 as core 18 on socket 1 00:05:34.554 EAL: Detected lcore 127 as core 19 on socket 1 00:05:34.554 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:34.554 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:34.554 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:34.554 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:34.554 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:34.554 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:34.554 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:34.554 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:34.554 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:34.554 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:34.554 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:34.554 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:34.554 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:34.554 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:34.554 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:34.554 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:34.554 EAL: Maximum logical cores by configuration: 128 00:05:34.554 EAL: Detected CPU lcores: 128 00:05:34.554 EAL: Detected NUMA nodes: 2 00:05:34.554 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:34.554 EAL: Detected shared linkage of DPDK 00:05:34.554 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.554 EAL: Bus pci wants IOVA as 'DC' 00:05:34.554 EAL: Buses did not request a specific IOVA mode. 00:05:34.554 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:34.554 EAL: Selected IOVA mode 'VA' 00:05:34.554 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.554 EAL: Probing VFIO support... 00:05:34.554 EAL: IOMMU type 1 (Type 1) is supported 00:05:34.554 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:34.554 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:34.554 EAL: VFIO support initialized 00:05:34.554 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.554 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.554 EAL: Setting up physically contiguous memory... 
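The EAL probe above ("IOMMU is available, selecting IOVA as VA mode" and "VFIO support initialized") keys off kernel state that can also be inspected from a shell. A rough approximation of that decision, not the EAL code path itself, assuming a stock Linux sysfs layout:

    # sketch: shell-level view of what the EAL VFIO/IOVA probe reports
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo 'IOMMU groups present: IOVA as VA is selectable'
    fi
    # type 1 is the x86 backend the log shows as the only supported type
    [[ -d /sys/module/vfio_iommu_type1 ]] && echo 'VFIO IOMMU type 1 loaded'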
00:05:34.554 EAL: Setting maximum number of open files to 524288 00:05:34.554 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.554 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:34.554 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.554 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:34.554 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.554 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:34.554 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.554 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.554 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:34.554 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:34.554 EAL: Hugepages will be freed exactly as allocated. 00:05:34.554 EAL: No shared files mode enabled, IPC is disabled 00:05:34.554 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: TSC frequency is ~2400000 KHz 00:05:34.555 EAL: Main lcore 0 is ready (tid=7f57b3feca00;cpuset=[0]) 00:05:34.555 EAL: Trying to obtain current memory policy. 00:05:34.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.555 EAL: Restoring previous memory policy: 0 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:34.555 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.555 00:05:34.555 00:05:34.555 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.555 http://cunit.sourceforge.net/ 00:05:34.555 00:05:34.555 00:05:34.555 Suite: components_suite 00:05:34.555 Test: vtophys_malloc_test ...passed 00:05:34.555 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:34.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.555 EAL: Restoring previous memory policy: 4 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was expanded by 4MB 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was shrunk by 4MB 00:05:34.555 EAL: Trying to obtain current memory policy. 00:05:34.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.555 EAL: Restoring previous memory policy: 4 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was expanded by 6MB 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was shrunk by 6MB 00:05:34.555 EAL: Trying to obtain current memory policy. 00:05:34.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.555 EAL: Restoring previous memory policy: 4 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was expanded by 10MB 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was shrunk by 10MB 00:05:34.555 EAL: Trying to obtain current memory policy. 
00:05:34.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.555 EAL: Restoring previous memory policy: 4 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was expanded by 18MB 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was shrunk by 18MB 00:05:34.555 EAL: Trying to obtain current memory policy. 00:05:34.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.555 EAL: Restoring previous memory policy: 4 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was expanded by 34MB 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was shrunk by 34MB 00:05:34.555 EAL: Trying to obtain current memory policy. 00:05:34.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.555 EAL: Restoring previous memory policy: 4 00:05:34.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.555 EAL: request: mp_malloc_sync 00:05:34.555 EAL: No shared files mode enabled, IPC is disabled 00:05:34.555 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.816 EAL: request: mp_malloc_sync 00:05:34.816 EAL: No shared files mode enabled, IPC is disabled 00:05:34.816 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.816 EAL: Trying to obtain current memory policy. 00:05:34.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.816 EAL: Restoring previous memory policy: 4 00:05:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.816 EAL: request: mp_malloc_sync 00:05:34.816 EAL: No shared files mode enabled, IPC is disabled 00:05:34.816 EAL: Heap on socket 0 was expanded by 130MB 00:05:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.816 EAL: request: mp_malloc_sync 00:05:34.816 EAL: No shared files mode enabled, IPC is disabled 00:05:34.816 EAL: Heap on socket 0 was shrunk by 130MB 00:05:34.816 EAL: Trying to obtain current memory policy. 00:05:34.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.816 EAL: Restoring previous memory policy: 4 00:05:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.816 EAL: request: mp_malloc_sync 00:05:34.816 EAL: No shared files mode enabled, IPC is disabled 00:05:34.816 EAL: Heap on socket 0 was expanded by 258MB 00:05:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.816 EAL: request: mp_malloc_sync 00:05:34.816 EAL: No shared files mode enabled, IPC is disabled 00:05:34.816 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.816 EAL: Trying to obtain current memory policy. 
00:05:34.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.816 EAL: Restoring previous memory policy: 4 00:05:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.816 EAL: request: mp_malloc_sync 00:05:34.816 EAL: No shared files mode enabled, IPC is disabled 00:05:34.816 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.076 EAL: request: mp_malloc_sync 00:05:35.076 EAL: No shared files mode enabled, IPC is disabled 00:05:35.076 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.076 EAL: Trying to obtain current memory policy. 00:05:35.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.076 EAL: Restoring previous memory policy: 4 00:05:35.076 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.076 EAL: request: mp_malloc_sync 00:05:35.076 EAL: No shared files mode enabled, IPC is disabled 00:05:35.076 EAL: Heap on socket 0 was expanded by 1026MB 00:05:35.336 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.336 EAL: request: mp_malloc_sync 00:05:35.336 EAL: No shared files mode enabled, IPC is disabled 00:05:35.336 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.336 passed 00:05:35.336 00:05:35.336 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.336 suites 1 1 n/a 0 0 00:05:35.336 tests 2 2 2 0 0 00:05:35.336 asserts 497 497 497 0 n/a 00:05:35.336 00:05:35.336 Elapsed time = 0.656 seconds 00:05:35.336 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.336 EAL: request: mp_malloc_sync 00:05:35.336 EAL: No shared files mode enabled, IPC is disabled 00:05:35.336 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.336 EAL: No shared files mode enabled, IPC is disabled 00:05:35.336 EAL: No shared files mode enabled, IPC is disabled 00:05:35.336 EAL: No shared files mode enabled, IPC is disabled 00:05:35.336 00:05:35.336 real 0m0.783s 00:05:35.336 user 0m0.398s 00:05:35.336 sys 0m0.361s 00:05:35.336 13:51:33 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.336 13:51:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:35.336 ************************************ 00:05:35.336 END TEST env_vtophys 00:05:35.336 ************************************ 00:05:35.336 13:51:33 env -- common/autotest_common.sh@1142 -- # return 0 00:05:35.336 13:51:33 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.336 13:51:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.336 13:51:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.336 13:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.336 ************************************ 00:05:35.336 START TEST env_pci 00:05:35.336 ************************************ 00:05:35.336 13:51:33 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.336 00:05:35.336 00:05:35.336 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.336 http://cunit.sourceforge.net/ 00:05:35.336 00:05:35.336 00:05:35.336 Suite: pci 00:05:35.336 Test: pci_hook ...[2024-07-15 13:51:33.414243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1138765 has claimed it 00:05:35.336 EAL: Cannot find device (10000:00:01.0) 00:05:35.336 EAL: Failed to attach device on primary process 00:05:35.336 passed 00:05:35.336 
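The pci_hook failure above is the intended outcome: SPDK serializes claims on a BDF with a lock file under /var/tmp, and the second claimant backs off with "probably process 1138765 has claimed it". A small sketch for auditing those locks after a crashed run, assuming the naming shown in that error text and that the claimant keeps the file open (fuser would then report it):

    # sketch: list spdk pci lock files and whether a live process holds them
    for lock in /var/tmp/spdk_pci_lock_*; do
        [[ -e $lock ]] || continue
        holder=$(fuser "$lock" 2>/dev/null)
        echo "${lock##*spdk_pci_lock_}: ${holder:-stale (no holder)}"
    done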
00:05:35.336 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.336 suites 1 1 n/a 0 0 00:05:35.336 tests 1 1 1 0 0 00:05:35.336 asserts 25 25 25 0 n/a 00:05:35.336 00:05:35.336 Elapsed time = 0.032 seconds 00:05:35.596 00:05:35.596 real 0m0.053s 00:05:35.596 user 0m0.014s 00:05:35.596 sys 0m0.038s 00:05:35.596 13:51:33 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.596 13:51:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:35.596 ************************************ 00:05:35.596 END TEST env_pci 00:05:35.596 ************************************ 00:05:35.596 13:51:33 env -- common/autotest_common.sh@1142 -- # return 0 00:05:35.596 13:51:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.596 13:51:33 env -- env/env.sh@15 -- # uname 00:05:35.596 13:51:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.596 13:51:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:35.596 13:51:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.596 13:51:33 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:35.596 13:51:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.596 13:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.596 ************************************ 00:05:35.596 START TEST env_dpdk_post_init 00:05:35.596 ************************************ 00:05:35.596 13:51:33 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.596 EAL: Detected CPU lcores: 128 00:05:35.596 EAL: Detected NUMA nodes: 2 00:05:35.596 EAL: Detected shared linkage of DPDK 00:05:35.596 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.596 EAL: Selected IOVA mode 'VA' 00:05:35.596 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.596 EAL: VFIO support initialized 00:05:35.596 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.596 EAL: Using IOMMU type 1 (Type 1) 00:05:35.858 EAL: Ignore mapping IO port bar(1) 00:05:35.858 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:36.118 EAL: Ignore mapping IO port bar(1) 00:05:36.118 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:36.118 EAL: Ignore mapping IO port bar(1) 00:05:36.377 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:36.377 EAL: Ignore mapping IO port bar(1) 00:05:36.636 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:36.636 EAL: Ignore mapping IO port bar(1) 00:05:36.895 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:36.895 EAL: Ignore mapping IO port bar(1) 00:05:36.895 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:37.155 EAL: Ignore mapping IO port bar(1) 00:05:37.155 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:37.416 EAL: Ignore mapping IO port bar(1) 00:05:37.416 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:37.677 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:37.937 EAL: Ignore mapping IO port bar(1) 00:05:37.937 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
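Each "Probe PCI driver" line in the env_dpdk_post_init run above corresponds to a sysfs driver binding that can be read back directly. A small sketch, assuming standard sysfs driver symlinks, to confirm what the probed BDFs are bound to (the kernel side shows vfio-pci; spdk_ioat and spdk_nvme are userspace drivers on top of it):

    # sketch: read back current kernel driver bindings for the probed BDFs
    for dev in /sys/bus/pci/devices/0000:65:00.0 /sys/bus/pci/devices/0000:80:01.*; do
        if [[ -L $dev/driver ]]; then
            printf '%s -> %s\n' "${dev##*/}" "$(basename "$(readlink "$dev/driver")")"
        else
            printf '%s -> unbound\n' "${dev##*/}"
        fi
    done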
00:05:37.937 EAL: Ignore mapping IO port bar(1) 00:05:38.197 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:38.197 EAL: Ignore mapping IO port bar(1) 00:05:38.457 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:38.457 EAL: Ignore mapping IO port bar(1) 00:05:38.457 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:38.718 EAL: Ignore mapping IO port bar(1) 00:05:38.718 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:38.978 EAL: Ignore mapping IO port bar(1) 00:05:38.979 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:39.239 EAL: Ignore mapping IO port bar(1) 00:05:39.239 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:39.500 EAL: Ignore mapping IO port bar(1) 00:05:39.500 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:39.500 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:39.500 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:39.500 Starting DPDK initialization... 00:05:39.500 Starting SPDK post initialization... 00:05:39.500 SPDK NVMe probe 00:05:39.500 Attaching to 0000:65:00.0 00:05:39.500 Attached to 0000:65:00.0 00:05:39.500 Cleaning up... 00:05:41.412 00:05:41.412 real 0m5.724s 00:05:41.412 user 0m0.195s 00:05:41.412 sys 0m0.071s 00:05:41.412 13:51:39 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.412 13:51:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.412 ************************************ 00:05:41.412 END TEST env_dpdk_post_init 00:05:41.412 ************************************ 00:05:41.412 13:51:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.412 13:51:39 env -- env/env.sh@26 -- # uname 00:05:41.412 13:51:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.412 13:51:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.412 13:51:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.412 13:51:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.412 13:51:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.412 ************************************ 00:05:41.412 START TEST env_mem_callbacks 00:05:41.412 ************************************ 00:05:41.412 13:51:39 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.412 EAL: Detected CPU lcores: 128 00:05:41.412 EAL: Detected NUMA nodes: 2 00:05:41.412 EAL: Detected shared linkage of DPDK 00:05:41.412 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.412 EAL: Selected IOVA mode 'VA' 00:05:41.412 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.412 EAL: VFIO support initialized 00:05:41.412 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.412 00:05:41.412 00:05:41.412 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.412 http://cunit.sourceforge.net/ 00:05:41.412 00:05:41.412 00:05:41.412 Suite: memory 00:05:41.412 Test: test ... 
00:05:41.412 register 0x200000200000 2097152 00:05:41.412 malloc 3145728 00:05:41.412 register 0x200000400000 4194304 00:05:41.412 buf 0x200000500000 len 3145728 PASSED 00:05:41.412 malloc 64 00:05:41.412 buf 0x2000004fff40 len 64 PASSED 00:05:41.412 malloc 4194304 00:05:41.412 register 0x200000800000 6291456 00:05:41.412 buf 0x200000a00000 len 4194304 PASSED 00:05:41.412 free 0x200000500000 3145728 00:05:41.412 free 0x2000004fff40 64 00:05:41.412 unregister 0x200000400000 4194304 PASSED 00:05:41.412 free 0x200000a00000 4194304 00:05:41.412 unregister 0x200000800000 6291456 PASSED 00:05:41.413 malloc 8388608 00:05:41.413 register 0x200000400000 10485760 00:05:41.413 buf 0x200000600000 len 8388608 PASSED 00:05:41.413 free 0x200000600000 8388608 00:05:41.413 unregister 0x200000400000 10485760 PASSED 00:05:41.413 passed 00:05:41.413 00:05:41.413 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.413 suites 1 1 n/a 0 0 00:05:41.413 tests 1 1 1 0 0 00:05:41.413 asserts 15 15 15 0 n/a 00:05:41.413 00:05:41.413 Elapsed time = 0.007 seconds 00:05:41.413 00:05:41.413 real 0m0.067s 00:05:41.413 user 0m0.022s 00:05:41.413 sys 0m0.046s 00:05:41.413 13:51:39 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.413 13:51:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.413 ************************************ 00:05:41.413 END TEST env_mem_callbacks 00:05:41.413 ************************************ 00:05:41.413 13:51:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.413 00:05:41.413 real 0m7.326s 00:05:41.413 user 0m1.007s 00:05:41.413 sys 0m0.860s 00:05:41.413 13:51:39 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.413 13:51:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.413 ************************************ 00:05:41.413 END TEST env 00:05:41.413 ************************************ 00:05:41.413 13:51:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.413 13:51:39 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.413 13:51:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.413 13:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.413 13:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.673 ************************************ 00:05:41.673 START TEST rpc 00:05:41.673 ************************************ 00:05:41.673 13:51:39 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.673 * Looking for test storage... 00:05:41.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.673 13:51:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1140186 00:05:41.673 13:51:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.673 13:51:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.673 13:51:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1140186 00:05:41.673 13:51:39 rpc -- common/autotest_common.sh@829 -- # '[' -z 1140186 ']' 00:05:41.673 13:51:39 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.673 13:51:39 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.673 13:51:39 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:41.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.673 13:51:39 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.673 13:51:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.673 [2024-07-15 13:51:39.688189] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:41.673 [2024-07-15 13:51:39.688261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140186 ] 00:05:41.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.673 [2024-07-15 13:51:39.761978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.934 [2024-07-15 13:51:39.835154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.934 [2024-07-15 13:51:39.835196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1140186' to capture a snapshot of events at runtime. 00:05:41.934 [2024-07-15 13:51:39.835204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.934 [2024-07-15 13:51:39.835210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.934 [2024-07-15 13:51:39.835216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1140186 for offline analysis/debug. 00:05:41.934 [2024-07-15 13:51:39.835244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.506 13:51:40 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.506 13:51:40 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.506 13:51:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.506 13:51:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.506 13:51:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.506 13:51:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.506 13:51:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.506 13:51:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.506 13:51:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.506 ************************************ 00:05:42.506 START TEST rpc_integrity 00:05:42.506 ************************************ 00:05:42.506 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.506 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.506 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.506 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.506 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.507 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.507 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.507 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.507 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.507 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.507 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.507 { 00:05:42.507 "name": "Malloc0", 00:05:42.507 "aliases": [ 00:05:42.507 "31fba09f-bbe5-4588-9cbf-04b96b2ca23d" 00:05:42.507 ], 00:05:42.507 "product_name": "Malloc disk", 00:05:42.507 "block_size": 512, 00:05:42.507 "num_blocks": 16384, 00:05:42.507 "uuid": "31fba09f-bbe5-4588-9cbf-04b96b2ca23d", 00:05:42.507 "assigned_rate_limits": { 00:05:42.507 "rw_ios_per_sec": 0, 00:05:42.507 "rw_mbytes_per_sec": 0, 00:05:42.507 "r_mbytes_per_sec": 0, 00:05:42.507 "w_mbytes_per_sec": 0 00:05:42.507 }, 00:05:42.507 "claimed": false, 00:05:42.507 "zoned": false, 00:05:42.507 "supported_io_types": { 00:05:42.507 "read": true, 00:05:42.507 "write": true, 00:05:42.507 "unmap": true, 00:05:42.507 "flush": true, 00:05:42.507 "reset": true, 00:05:42.507 "nvme_admin": false, 00:05:42.507 "nvme_io": false, 00:05:42.507 "nvme_io_md": false, 00:05:42.507 "write_zeroes": true, 00:05:42.507 "zcopy": true, 00:05:42.507 "get_zone_info": false, 00:05:42.507 "zone_management": false, 00:05:42.507 "zone_append": false, 00:05:42.507 "compare": false, 00:05:42.507 "compare_and_write": false, 00:05:42.507 "abort": true, 00:05:42.507 "seek_hole": false, 00:05:42.507 "seek_data": false, 00:05:42.507 "copy": true, 00:05:42.507 "nvme_iov_md": false 00:05:42.507 }, 00:05:42.507 "memory_domains": [ 00:05:42.507 { 00:05:42.507 "dma_device_id": "system", 00:05:42.507 "dma_device_type": 1 00:05:42.507 }, 00:05:42.507 { 00:05:42.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.507 "dma_device_type": 2 00:05:42.507 } 00:05:42.507 ], 00:05:42.507 "driver_specific": {} 00:05:42.507 } 00:05:42.507 ]' 00:05:42.507 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.769 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.769 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.769 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.769 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.769 [2024-07-15 13:51:40.636172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.769 [2024-07-15 13:51:40.636207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.769 [2024-07-15 13:51:40.636219] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1671a10 00:05:42.769 [2024-07-15 13:51:40.636226] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.769 
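The rpc_integrity test running here creates a malloc bdev, layers a passthru bdev on top of it, and then verifies and deletes both. For reference, the same round-trip can be driven by hand against a running spdk_tgt with SPDK's in-tree scripts/rpc.py client (a sketch; the harness's rpc_cmd wraps the same JSON-RPC calls, and the relative paths assume an SPDK checkout):

  ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB malloc bdev, 512 B blocks; prints the new name, e.g. Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2 while both bdevs exist
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length          # back to 0
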
[2024-07-15 13:51:40.637572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.769 [2024-07-15 13:51:40.637593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.769 Passthru0 00:05:42.769 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.769 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.769 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.769 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.769 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.769 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.769 { 00:05:42.769 "name": "Malloc0", 00:05:42.769 "aliases": [ 00:05:42.769 "31fba09f-bbe5-4588-9cbf-04b96b2ca23d" 00:05:42.769 ], 00:05:42.769 "product_name": "Malloc disk", 00:05:42.769 "block_size": 512, 00:05:42.769 "num_blocks": 16384, 00:05:42.769 "uuid": "31fba09f-bbe5-4588-9cbf-04b96b2ca23d", 00:05:42.769 "assigned_rate_limits": { 00:05:42.769 "rw_ios_per_sec": 0, 00:05:42.769 "rw_mbytes_per_sec": 0, 00:05:42.769 "r_mbytes_per_sec": 0, 00:05:42.769 "w_mbytes_per_sec": 0 00:05:42.769 }, 00:05:42.769 "claimed": true, 00:05:42.769 "claim_type": "exclusive_write", 00:05:42.769 "zoned": false, 00:05:42.769 "supported_io_types": { 00:05:42.769 "read": true, 00:05:42.769 "write": true, 00:05:42.769 "unmap": true, 00:05:42.769 "flush": true, 00:05:42.769 "reset": true, 00:05:42.769 "nvme_admin": false, 00:05:42.769 "nvme_io": false, 00:05:42.769 "nvme_io_md": false, 00:05:42.769 "write_zeroes": true, 00:05:42.769 "zcopy": true, 00:05:42.769 "get_zone_info": false, 00:05:42.769 "zone_management": false, 00:05:42.769 "zone_append": false, 00:05:42.769 "compare": false, 00:05:42.769 "compare_and_write": false, 00:05:42.769 "abort": true, 00:05:42.769 "seek_hole": false, 00:05:42.769 "seek_data": false, 00:05:42.769 "copy": true, 00:05:42.769 "nvme_iov_md": false 00:05:42.769 }, 00:05:42.769 "memory_domains": [ 00:05:42.769 { 00:05:42.770 "dma_device_id": "system", 00:05:42.770 "dma_device_type": 1 00:05:42.770 }, 00:05:42.770 { 00:05:42.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.770 "dma_device_type": 2 00:05:42.770 } 00:05:42.770 ], 00:05:42.770 "driver_specific": {} 00:05:42.770 }, 00:05:42.770 { 00:05:42.770 "name": "Passthru0", 00:05:42.770 "aliases": [ 00:05:42.770 "9417c619-f1fa-5113-a656-07bda336bad5" 00:05:42.770 ], 00:05:42.770 "product_name": "passthru", 00:05:42.770 "block_size": 512, 00:05:42.770 "num_blocks": 16384, 00:05:42.770 "uuid": "9417c619-f1fa-5113-a656-07bda336bad5", 00:05:42.770 "assigned_rate_limits": { 00:05:42.770 "rw_ios_per_sec": 0, 00:05:42.770 "rw_mbytes_per_sec": 0, 00:05:42.770 "r_mbytes_per_sec": 0, 00:05:42.770 "w_mbytes_per_sec": 0 00:05:42.770 }, 00:05:42.770 "claimed": false, 00:05:42.770 "zoned": false, 00:05:42.770 "supported_io_types": { 00:05:42.770 "read": true, 00:05:42.770 "write": true, 00:05:42.770 "unmap": true, 00:05:42.770 "flush": true, 00:05:42.770 "reset": true, 00:05:42.770 "nvme_admin": false, 00:05:42.770 "nvme_io": false, 00:05:42.770 "nvme_io_md": false, 00:05:42.770 "write_zeroes": true, 00:05:42.770 "zcopy": true, 00:05:42.770 "get_zone_info": false, 00:05:42.770 "zone_management": false, 00:05:42.770 "zone_append": false, 00:05:42.770 "compare": false, 00:05:42.770 "compare_and_write": false, 00:05:42.770 "abort": true, 00:05:42.770 "seek_hole": false, 
00:05:42.770 "seek_data": false, 00:05:42.770 "copy": true, 00:05:42.770 "nvme_iov_md": false 00:05:42.770 }, 00:05:42.770 "memory_domains": [ 00:05:42.770 { 00:05:42.770 "dma_device_id": "system", 00:05:42.770 "dma_device_type": 1 00:05:42.770 }, 00:05:42.770 { 00:05:42.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.770 "dma_device_type": 2 00:05:42.770 } 00:05:42.770 ], 00:05:42.770 "driver_specific": { 00:05:42.770 "passthru": { 00:05:42.770 "name": "Passthru0", 00:05:42.770 "base_bdev_name": "Malloc0" 00:05:42.770 } 00:05:42.770 } 00:05:42.770 } 00:05:42.770 ]' 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.770 13:51:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.770 00:05:42.770 real 0m0.289s 00:05:42.770 user 0m0.191s 00:05:42.770 sys 0m0.033s 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.770 13:51:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.770 ************************************ 00:05:42.770 END TEST rpc_integrity 00:05:42.770 ************************************ 00:05:42.770 13:51:40 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.770 13:51:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.770 13:51:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.770 13:51:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.770 13:51:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.770 ************************************ 00:05:42.770 START TEST rpc_plugins 00:05:42.770 ************************************ 00:05:42.770 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:42.770 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.770 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.770 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.770 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:43.032 { 00:05:43.032 "name": "Malloc1", 00:05:43.032 "aliases": [ 00:05:43.032 "93eb4f3e-34da-42c8-aef2-2f3ee2841dc6" 00:05:43.032 ], 00:05:43.032 "product_name": "Malloc disk", 00:05:43.032 "block_size": 4096, 00:05:43.032 "num_blocks": 256, 00:05:43.032 "uuid": "93eb4f3e-34da-42c8-aef2-2f3ee2841dc6", 00:05:43.032 "assigned_rate_limits": { 00:05:43.032 "rw_ios_per_sec": 0, 00:05:43.032 "rw_mbytes_per_sec": 0, 00:05:43.032 "r_mbytes_per_sec": 0, 00:05:43.032 "w_mbytes_per_sec": 0 00:05:43.032 }, 00:05:43.032 "claimed": false, 00:05:43.032 "zoned": false, 00:05:43.032 "supported_io_types": { 00:05:43.032 "read": true, 00:05:43.032 "write": true, 00:05:43.032 "unmap": true, 00:05:43.032 "flush": true, 00:05:43.032 "reset": true, 00:05:43.032 "nvme_admin": false, 00:05:43.032 "nvme_io": false, 00:05:43.032 "nvme_io_md": false, 00:05:43.032 "write_zeroes": true, 00:05:43.032 "zcopy": true, 00:05:43.032 "get_zone_info": false, 00:05:43.032 "zone_management": false, 00:05:43.032 "zone_append": false, 00:05:43.032 "compare": false, 00:05:43.032 "compare_and_write": false, 00:05:43.032 "abort": true, 00:05:43.032 "seek_hole": false, 00:05:43.032 "seek_data": false, 00:05:43.032 "copy": true, 00:05:43.032 "nvme_iov_md": false 00:05:43.032 }, 00:05:43.032 "memory_domains": [ 00:05:43.032 { 00:05:43.032 "dma_device_id": "system", 00:05:43.032 "dma_device_type": 1 00:05:43.032 }, 00:05:43.032 { 00:05:43.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.032 "dma_device_type": 2 00:05:43.032 } 00:05:43.032 ], 00:05:43.032 "driver_specific": {} 00:05:43.032 } 00:05:43.032 ]' 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.032 13:51:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:43.032 13:51:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:43.032 13:51:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:43.032 00:05:43.032 real 0m0.149s 00:05:43.032 user 0m0.097s 00:05:43.032 sys 0m0.018s 00:05:43.032 13:51:41 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.032 13:51:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.032 ************************************ 00:05:43.032 END TEST rpc_plugins 00:05:43.032 ************************************ 00:05:43.032 13:51:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.032 13:51:41 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:43.032 13:51:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.032 13:51:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.032 13:51:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.032 ************************************ 00:05:43.032 START TEST rpc_trace_cmd_test 00:05:43.032 ************************************ 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:43.032 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1140186", 00:05:43.032 "tpoint_group_mask": "0x8", 00:05:43.032 "iscsi_conn": { 00:05:43.032 "mask": "0x2", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "scsi": { 00:05:43.032 "mask": "0x4", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "bdev": { 00:05:43.032 "mask": "0x8", 00:05:43.032 "tpoint_mask": "0xffffffffffffffff" 00:05:43.032 }, 00:05:43.032 "nvmf_rdma": { 00:05:43.032 "mask": "0x10", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "nvmf_tcp": { 00:05:43.032 "mask": "0x20", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "ftl": { 00:05:43.032 "mask": "0x40", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "blobfs": { 00:05:43.032 "mask": "0x80", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "dsa": { 00:05:43.032 "mask": "0x200", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "thread": { 00:05:43.032 "mask": "0x400", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "nvme_pcie": { 00:05:43.032 "mask": "0x800", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "iaa": { 00:05:43.032 "mask": "0x1000", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "nvme_tcp": { 00:05:43.032 "mask": "0x2000", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "bdev_nvme": { 00:05:43.032 "mask": "0x4000", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 }, 00:05:43.032 "sock": { 00:05:43.032 "mask": "0x8000", 00:05:43.032 "tpoint_mask": "0x0" 00:05:43.032 } 00:05:43.032 }' 00:05:43.032 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
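The jq assertions above pin down what '-e bdev' did at startup: the bdev tracepoint group (bit 0x8) is the only one enabled in tpoint_group_mask, and its per-tracepoint mask is fully set while every other group stays at 0x0. A sketch of the same inspection by hand against a running target (paths assume an SPDK checkout):

  ./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask    # expect "0x8"
  ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask     # expect 0xffffffffffffffff
  # events can then be read from the shared-memory file reported above:
  spdk_trace -s spdk_tgt -p <pid of spdk_tgt>
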
00:05:43.293 00:05:43.293 real 0m0.241s 00:05:43.293 user 0m0.198s 00:05:43.293 sys 0m0.035s 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.293 13:51:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.293 ************************************ 00:05:43.293 END TEST rpc_trace_cmd_test 00:05:43.293 ************************************ 00:05:43.293 13:51:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.293 13:51:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:43.293 13:51:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:43.293 13:51:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:43.293 13:51:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.293 13:51:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.293 13:51:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 ************************************ 00:05:43.554 START TEST rpc_daemon_integrity 00:05:43.554 ************************************ 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.554 { 00:05:43.554 "name": "Malloc2", 00:05:43.554 "aliases": [ 00:05:43.554 "1f0e616b-cc03-4981-a50f-df6d7eb5124f" 00:05:43.554 ], 00:05:43.554 "product_name": "Malloc disk", 00:05:43.554 "block_size": 512, 00:05:43.554 "num_blocks": 16384, 00:05:43.554 "uuid": "1f0e616b-cc03-4981-a50f-df6d7eb5124f", 00:05:43.554 "assigned_rate_limits": { 00:05:43.554 "rw_ios_per_sec": 0, 00:05:43.554 "rw_mbytes_per_sec": 0, 00:05:43.554 "r_mbytes_per_sec": 0, 00:05:43.554 "w_mbytes_per_sec": 0 00:05:43.554 }, 00:05:43.554 "claimed": false, 00:05:43.554 "zoned": false, 00:05:43.554 "supported_io_types": { 00:05:43.554 "read": true, 00:05:43.554 "write": true, 00:05:43.554 "unmap": true, 00:05:43.554 "flush": true, 00:05:43.554 "reset": true, 00:05:43.554 "nvme_admin": false, 00:05:43.554 "nvme_io": false, 
00:05:43.554 "nvme_io_md": false, 00:05:43.554 "write_zeroes": true, 00:05:43.554 "zcopy": true, 00:05:43.554 "get_zone_info": false, 00:05:43.554 "zone_management": false, 00:05:43.554 "zone_append": false, 00:05:43.554 "compare": false, 00:05:43.554 "compare_and_write": false, 00:05:43.554 "abort": true, 00:05:43.554 "seek_hole": false, 00:05:43.554 "seek_data": false, 00:05:43.554 "copy": true, 00:05:43.554 "nvme_iov_md": false 00:05:43.554 }, 00:05:43.554 "memory_domains": [ 00:05:43.554 { 00:05:43.554 "dma_device_id": "system", 00:05:43.554 "dma_device_type": 1 00:05:43.554 }, 00:05:43.554 { 00:05:43.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.554 "dma_device_type": 2 00:05:43.554 } 00:05:43.554 ], 00:05:43.554 "driver_specific": {} 00:05:43.554 } 00:05:43.554 ]' 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 [2024-07-15 13:51:41.550670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:43.554 [2024-07-15 13:51:41.550702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.554 [2024-07-15 13:51:41.550717] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1808fe0 00:05:43.554 [2024-07-15 13:51:41.550725] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.554 [2024-07-15 13:51:41.551943] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.554 [2024-07-15 13:51:41.551963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.554 Passthru0 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.554 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.554 { 00:05:43.554 "name": "Malloc2", 00:05:43.554 "aliases": [ 00:05:43.554 "1f0e616b-cc03-4981-a50f-df6d7eb5124f" 00:05:43.554 ], 00:05:43.554 "product_name": "Malloc disk", 00:05:43.554 "block_size": 512, 00:05:43.554 "num_blocks": 16384, 00:05:43.554 "uuid": "1f0e616b-cc03-4981-a50f-df6d7eb5124f", 00:05:43.554 "assigned_rate_limits": { 00:05:43.554 "rw_ios_per_sec": 0, 00:05:43.554 "rw_mbytes_per_sec": 0, 00:05:43.554 "r_mbytes_per_sec": 0, 00:05:43.554 "w_mbytes_per_sec": 0 00:05:43.554 }, 00:05:43.554 "claimed": true, 00:05:43.554 "claim_type": "exclusive_write", 00:05:43.554 "zoned": false, 00:05:43.554 "supported_io_types": { 00:05:43.554 "read": true, 00:05:43.554 "write": true, 00:05:43.554 "unmap": true, 00:05:43.554 "flush": true, 00:05:43.554 "reset": true, 00:05:43.554 "nvme_admin": false, 00:05:43.554 "nvme_io": false, 00:05:43.554 "nvme_io_md": false, 00:05:43.554 "write_zeroes": true, 00:05:43.554 "zcopy": true, 00:05:43.554 "get_zone_info": 
false, 00:05:43.555 "zone_management": false, 00:05:43.555 "zone_append": false, 00:05:43.555 "compare": false, 00:05:43.555 "compare_and_write": false, 00:05:43.555 "abort": true, 00:05:43.555 "seek_hole": false, 00:05:43.555 "seek_data": false, 00:05:43.555 "copy": true, 00:05:43.555 "nvme_iov_md": false 00:05:43.555 }, 00:05:43.555 "memory_domains": [ 00:05:43.555 { 00:05:43.555 "dma_device_id": "system", 00:05:43.555 "dma_device_type": 1 00:05:43.555 }, 00:05:43.555 { 00:05:43.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.555 "dma_device_type": 2 00:05:43.555 } 00:05:43.555 ], 00:05:43.555 "driver_specific": {} 00:05:43.555 }, 00:05:43.555 { 00:05:43.555 "name": "Passthru0", 00:05:43.555 "aliases": [ 00:05:43.555 "51411ed9-5136-5012-a186-06ec7f7237ea" 00:05:43.555 ], 00:05:43.555 "product_name": "passthru", 00:05:43.555 "block_size": 512, 00:05:43.555 "num_blocks": 16384, 00:05:43.555 "uuid": "51411ed9-5136-5012-a186-06ec7f7237ea", 00:05:43.555 "assigned_rate_limits": { 00:05:43.555 "rw_ios_per_sec": 0, 00:05:43.555 "rw_mbytes_per_sec": 0, 00:05:43.555 "r_mbytes_per_sec": 0, 00:05:43.555 "w_mbytes_per_sec": 0 00:05:43.555 }, 00:05:43.555 "claimed": false, 00:05:43.555 "zoned": false, 00:05:43.555 "supported_io_types": { 00:05:43.555 "read": true, 00:05:43.555 "write": true, 00:05:43.555 "unmap": true, 00:05:43.555 "flush": true, 00:05:43.555 "reset": true, 00:05:43.555 "nvme_admin": false, 00:05:43.555 "nvme_io": false, 00:05:43.555 "nvme_io_md": false, 00:05:43.555 "write_zeroes": true, 00:05:43.555 "zcopy": true, 00:05:43.555 "get_zone_info": false, 00:05:43.555 "zone_management": false, 00:05:43.555 "zone_append": false, 00:05:43.555 "compare": false, 00:05:43.555 "compare_and_write": false, 00:05:43.555 "abort": true, 00:05:43.555 "seek_hole": false, 00:05:43.555 "seek_data": false, 00:05:43.555 "copy": true, 00:05:43.555 "nvme_iov_md": false 00:05:43.555 }, 00:05:43.555 "memory_domains": [ 00:05:43.555 { 00:05:43.555 "dma_device_id": "system", 00:05:43.555 "dma_device_type": 1 00:05:43.555 }, 00:05:43.555 { 00:05:43.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.555 "dma_device_type": 2 00:05:43.555 } 00:05:43.555 ], 00:05:43.555 "driver_specific": { 00:05:43.555 "passthru": { 00:05:43.555 "name": "Passthru0", 00:05:43.555 "base_bdev_name": "Malloc2" 00:05:43.555 } 00:05:43.555 } 00:05:43.555 } 00:05:43.555 ]' 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.555 13:51:41 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.555 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.816 13:51:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.816 00:05:43.816 real 0m0.294s 00:05:43.816 user 0m0.183s 00:05:43.816 sys 0m0.044s 00:05:43.816 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.816 13:51:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 ************************************ 00:05:43.816 END TEST rpc_daemon_integrity 00:05:43.816 ************************************ 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.816 13:51:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.816 13:51:41 rpc -- rpc/rpc.sh@84 -- # killprocess 1140186 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@948 -- # '[' -z 1140186 ']' 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@952 -- # kill -0 1140186 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1140186 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1140186' 00:05:43.816 killing process with pid 1140186 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@967 -- # kill 1140186 00:05:43.816 13:51:41 rpc -- common/autotest_common.sh@972 -- # wait 1140186 00:05:44.076 00:05:44.076 real 0m2.479s 00:05:44.076 user 0m3.278s 00:05:44.076 sys 0m0.688s 00:05:44.076 13:51:42 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.076 13:51:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.076 ************************************ 00:05:44.076 END TEST rpc 00:05:44.076 ************************************ 00:05:44.076 13:51:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.076 13:51:42 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:44.076 13:51:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.076 13:51:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.076 13:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.076 ************************************ 00:05:44.076 START TEST skip_rpc 00:05:44.076 ************************************ 00:05:44.076 13:51:42 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:44.076 * Looking for test storage... 
00:05:44.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.076 13:51:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:44.076 13:51:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:44.076 13:51:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:44.076 13:51:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.076 13:51:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.076 13:51:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.337 ************************************ 00:05:44.337 START TEST skip_rpc 00:05:44.337 ************************************ 00:05:44.337 13:51:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:44.337 13:51:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1140736 00:05:44.337 13:51:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.337 13:51:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:44.337 13:51:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:44.337 [2024-07-15 13:51:42.267579] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:44.337 [2024-07-15 13:51:42.267646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140736 ] 00:05:44.337 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.337 [2024-07-15 13:51:42.337642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.337 [2024-07-15 13:51:42.411833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1140736 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1140736 ']' 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1140736 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1140736 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1140736' 00:05:49.621 killing process with pid 1140736 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1140736 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1140736 00:05:49.621 00:05:49.621 real 0m5.275s 00:05:49.621 user 0m5.074s 00:05:49.621 sys 0m0.230s 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.621 13:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.621 ************************************ 00:05:49.621 END TEST skip_rpc 00:05:49.621 ************************************ 00:05:49.621 13:51:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.621 13:51:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.621 13:51:47 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.621 13:51:47 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.621 13:51:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.621 ************************************ 00:05:49.621 START TEST skip_rpc_with_json 00:05:49.622 ************************************ 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1141842 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1141842 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1141842 ']' 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
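The skip_rpc case that just finished hinges on '--no-rpc-server': with the RPC listener suppressed, /var/tmp/spdk.sock is never created, so the NOT-wrapped spdk_get_version call fails exactly as the test requires. A sketch of the same behaviour by hand (paths assume an SPDK checkout):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  ./scripts/rpc.py spdk_get_version    # fails: cannot connect to /var/tmp/spdk.sock
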
00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.622 13:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.622 [2024-07-15 13:51:47.621902] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:49.622 [2024-07-15 13:51:47.621956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141842 ] 00:05:49.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.622 [2024-07-15 13:51:47.691594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.881 [2024-07-15 13:51:47.760501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.451 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.451 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:50.451 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.451 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.451 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.451 [2024-07-15 13:51:48.396424] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.451 request: 00:05:50.451 { 00:05:50.451 "trtype": "tcp", 00:05:50.451 "method": "nvmf_get_transports", 00:05:50.451 "req_id": 1 00:05:50.451 } 00:05:50.451 Got JSON-RPC error response 00:05:50.451 response: 00:05:50.451 { 00:05:50.451 "code": -19, 00:05:50.451 "message": "No such device" 00:05:50.451 } 00:05:50.451 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:50.451 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.452 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.452 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.452 [2024-07-15 13:51:48.408542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.452 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.452 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.452 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.452 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.712 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.712 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.712 { 00:05:50.712 "subsystems": [ 00:05:50.712 { 00:05:50.712 "subsystem": "vfio_user_target", 00:05:50.712 "config": null 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "subsystem": "keyring", 00:05:50.712 "config": [] 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "subsystem": "iobuf", 00:05:50.712 "config": [ 00:05:50.712 { 00:05:50.712 "method": "iobuf_set_options", 00:05:50.712 "params": { 00:05:50.712 "small_pool_count": 8192, 00:05:50.712 "large_pool_count": 1024, 00:05:50.712 "small_bufsize": 8192, 00:05:50.712 "large_bufsize": 
135168 00:05:50.712 } 00:05:50.712 } 00:05:50.712 ] 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "subsystem": "sock", 00:05:50.712 "config": [ 00:05:50.712 { 00:05:50.712 "method": "sock_set_default_impl", 00:05:50.712 "params": { 00:05:50.712 "impl_name": "posix" 00:05:50.712 } 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "method": "sock_impl_set_options", 00:05:50.712 "params": { 00:05:50.712 "impl_name": "ssl", 00:05:50.712 "recv_buf_size": 4096, 00:05:50.712 "send_buf_size": 4096, 00:05:50.712 "enable_recv_pipe": true, 00:05:50.712 "enable_quickack": false, 00:05:50.712 "enable_placement_id": 0, 00:05:50.712 "enable_zerocopy_send_server": true, 00:05:50.712 "enable_zerocopy_send_client": false, 00:05:50.712 "zerocopy_threshold": 0, 00:05:50.712 "tls_version": 0, 00:05:50.712 "enable_ktls": false 00:05:50.712 } 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "method": "sock_impl_set_options", 00:05:50.712 "params": { 00:05:50.712 "impl_name": "posix", 00:05:50.712 "recv_buf_size": 2097152, 00:05:50.712 "send_buf_size": 2097152, 00:05:50.712 "enable_recv_pipe": true, 00:05:50.712 "enable_quickack": false, 00:05:50.712 "enable_placement_id": 0, 00:05:50.712 "enable_zerocopy_send_server": true, 00:05:50.712 "enable_zerocopy_send_client": false, 00:05:50.712 "zerocopy_threshold": 0, 00:05:50.712 "tls_version": 0, 00:05:50.712 "enable_ktls": false 00:05:50.712 } 00:05:50.712 } 00:05:50.712 ] 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "subsystem": "vmd", 00:05:50.712 "config": [] 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "subsystem": "accel", 00:05:50.712 "config": [ 00:05:50.712 { 00:05:50.712 "method": "accel_set_options", 00:05:50.712 "params": { 00:05:50.712 "small_cache_size": 128, 00:05:50.712 "large_cache_size": 16, 00:05:50.712 "task_count": 2048, 00:05:50.712 "sequence_count": 2048, 00:05:50.712 "buf_count": 2048 00:05:50.712 } 00:05:50.712 } 00:05:50.712 ] 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "subsystem": "bdev", 00:05:50.712 "config": [ 00:05:50.712 { 00:05:50.712 "method": "bdev_set_options", 00:05:50.712 "params": { 00:05:50.712 "bdev_io_pool_size": 65535, 00:05:50.712 "bdev_io_cache_size": 256, 00:05:50.712 "bdev_auto_examine": true, 00:05:50.712 "iobuf_small_cache_size": 128, 00:05:50.712 "iobuf_large_cache_size": 16 00:05:50.712 } 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "method": "bdev_raid_set_options", 00:05:50.712 "params": { 00:05:50.712 "process_window_size_kb": 1024 00:05:50.712 } 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "method": "bdev_iscsi_set_options", 00:05:50.712 "params": { 00:05:50.712 "timeout_sec": 30 00:05:50.712 } 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "method": "bdev_nvme_set_options", 00:05:50.712 "params": { 00:05:50.712 "action_on_timeout": "none", 00:05:50.712 "timeout_us": 0, 00:05:50.712 "timeout_admin_us": 0, 00:05:50.712 "keep_alive_timeout_ms": 10000, 00:05:50.712 "arbitration_burst": 0, 00:05:50.712 "low_priority_weight": 0, 00:05:50.712 "medium_priority_weight": 0, 00:05:50.712 "high_priority_weight": 0, 00:05:50.712 "nvme_adminq_poll_period_us": 10000, 00:05:50.712 "nvme_ioq_poll_period_us": 0, 00:05:50.712 "io_queue_requests": 0, 00:05:50.712 "delay_cmd_submit": true, 00:05:50.712 "transport_retry_count": 4, 00:05:50.712 "bdev_retry_count": 3, 00:05:50.712 "transport_ack_timeout": 0, 00:05:50.712 "ctrlr_loss_timeout_sec": 0, 00:05:50.712 "reconnect_delay_sec": 0, 00:05:50.712 "fast_io_fail_timeout_sec": 0, 00:05:50.712 "disable_auto_failback": false, 00:05:50.712 "generate_uuids": false, 00:05:50.712 "transport_tos": 0, 
00:05:50.712 "nvme_error_stat": false, 00:05:50.712 "rdma_srq_size": 0, 00:05:50.712 "io_path_stat": false, 00:05:50.712 "allow_accel_sequence": false, 00:05:50.712 "rdma_max_cq_size": 0, 00:05:50.712 "rdma_cm_event_timeout_ms": 0, 00:05:50.712 "dhchap_digests": [ 00:05:50.712 "sha256", 00:05:50.712 "sha384", 00:05:50.712 "sha512" 00:05:50.712 ], 00:05:50.712 "dhchap_dhgroups": [ 00:05:50.712 "null", 00:05:50.712 "ffdhe2048", 00:05:50.712 "ffdhe3072", 00:05:50.712 "ffdhe4096", 00:05:50.712 "ffdhe6144", 00:05:50.712 "ffdhe8192" 00:05:50.712 ] 00:05:50.712 } 00:05:50.712 }, 00:05:50.712 { 00:05:50.712 "method": "bdev_nvme_set_hotplug", 00:05:50.712 "params": { 00:05:50.713 "period_us": 100000, 00:05:50.713 "enable": false 00:05:50.713 } 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "method": "bdev_wait_for_examine" 00:05:50.713 } 00:05:50.713 ] 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "scsi", 00:05:50.713 "config": null 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "scheduler", 00:05:50.713 "config": [ 00:05:50.713 { 00:05:50.713 "method": "framework_set_scheduler", 00:05:50.713 "params": { 00:05:50.713 "name": "static" 00:05:50.713 } 00:05:50.713 } 00:05:50.713 ] 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "vhost_scsi", 00:05:50.713 "config": [] 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "vhost_blk", 00:05:50.713 "config": [] 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "ublk", 00:05:50.713 "config": [] 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "nbd", 00:05:50.713 "config": [] 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "nvmf", 00:05:50.713 "config": [ 00:05:50.713 { 00:05:50.713 "method": "nvmf_set_config", 00:05:50.713 "params": { 00:05:50.713 "discovery_filter": "match_any", 00:05:50.713 "admin_cmd_passthru": { 00:05:50.713 "identify_ctrlr": false 00:05:50.713 } 00:05:50.713 } 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "method": "nvmf_set_max_subsystems", 00:05:50.713 "params": { 00:05:50.713 "max_subsystems": 1024 00:05:50.713 } 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "method": "nvmf_set_crdt", 00:05:50.713 "params": { 00:05:50.713 "crdt1": 0, 00:05:50.713 "crdt2": 0, 00:05:50.713 "crdt3": 0 00:05:50.713 } 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "method": "nvmf_create_transport", 00:05:50.713 "params": { 00:05:50.713 "trtype": "TCP", 00:05:50.713 "max_queue_depth": 128, 00:05:50.713 "max_io_qpairs_per_ctrlr": 127, 00:05:50.713 "in_capsule_data_size": 4096, 00:05:50.713 "max_io_size": 131072, 00:05:50.713 "io_unit_size": 131072, 00:05:50.713 "max_aq_depth": 128, 00:05:50.713 "num_shared_buffers": 511, 00:05:50.713 "buf_cache_size": 4294967295, 00:05:50.713 "dif_insert_or_strip": false, 00:05:50.713 "zcopy": false, 00:05:50.713 "c2h_success": true, 00:05:50.713 "sock_priority": 0, 00:05:50.713 "abort_timeout_sec": 1, 00:05:50.713 "ack_timeout": 0, 00:05:50.713 "data_wr_pool_size": 0 00:05:50.713 } 00:05:50.713 } 00:05:50.713 ] 00:05:50.713 }, 00:05:50.713 { 00:05:50.713 "subsystem": "iscsi", 00:05:50.713 "config": [ 00:05:50.713 { 00:05:50.713 "method": "iscsi_set_options", 00:05:50.713 "params": { 00:05:50.713 "node_base": "iqn.2016-06.io.spdk", 00:05:50.713 "max_sessions": 128, 00:05:50.713 "max_connections_per_session": 2, 00:05:50.713 "max_queue_depth": 64, 00:05:50.713 "default_time2wait": 2, 00:05:50.713 "default_time2retain": 20, 00:05:50.713 "first_burst_length": 8192, 00:05:50.713 "immediate_data": true, 00:05:50.713 "allow_duplicated_isid": false, 00:05:50.713 
"error_recovery_level": 0, 00:05:50.713 "nop_timeout": 60, 00:05:50.713 "nop_in_interval": 30, 00:05:50.713 "disable_chap": false, 00:05:50.713 "require_chap": false, 00:05:50.713 "mutual_chap": false, 00:05:50.713 "chap_group": 0, 00:05:50.713 "max_large_datain_per_connection": 64, 00:05:50.713 "max_r2t_per_connection": 4, 00:05:50.713 "pdu_pool_size": 36864, 00:05:50.713 "immediate_data_pool_size": 16384, 00:05:50.713 "data_out_pool_size": 2048 00:05:50.713 } 00:05:50.713 } 00:05:50.713 ] 00:05:50.713 } 00:05:50.713 ] 00:05:50.713 } 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1141842 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1141842 ']' 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1141842 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1141842 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1141842' 00:05:50.713 killing process with pid 1141842 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1141842 00:05:50.713 13:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1141842 00:05:50.974 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1142113 00:05:50.974 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:50.974 13:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1142113 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1142113 ']' 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1142113 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1142113 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1142113' 00:05:56.258 killing process with pid 1142113 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1142113 00:05:56.258 13:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1142113 
00:05:56.258 13:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.258 13:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.258 00:05:56.258 real 0m6.551s 00:05:56.258 user 0m6.438s 00:05:56.259 sys 0m0.526s 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.259 ************************************ 00:05:56.259 END TEST skip_rpc_with_json 00:05:56.259 ************************************ 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.259 13:51:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.259 ************************************ 00:05:56.259 START TEST skip_rpc_with_delay 00:05:56.259 ************************************ 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.259 [2024-07-15 13:51:54.249952] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
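The error above is the point of skip_rpc_with_delay: '--wait-for-rpc' tells the app to pause before subsystem init until an RPC (framework_start_init) arrives, so combining it with '--no-rpc-server' can never make progress, and spdk_app_start rejects the pair up front. The deliberately invalid invocation, as a sketch:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc    # exits with the error above
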
00:05:56.259 [2024-07-15 13:51:54.250044] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.259 00:05:56.259 real 0m0.076s 00:05:56.259 user 0m0.044s 00:05:56.259 sys 0m0.031s 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.259 13:51:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:56.259 ************************************ 00:05:56.259 END TEST skip_rpc_with_delay 00:05:56.259 ************************************ 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.259 13:51:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:56.259 13:51:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:56.259 13:51:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.259 13:51:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.259 ************************************ 00:05:56.259 START TEST exit_on_failed_rpc_init 00:05:56.259 ************************************ 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1143287 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1143287 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1143287 ']' 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.259 13:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.520 [2024-07-15 13:51:54.402401] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
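The NOT wrapper driving both of these negative tests inverts a command's exit status: the test passes only if the wrapped command fails. Stripped of the es bookkeeping visible in the trace, the idiom is roughly:

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # expected failure
    }

    # e.g. --wait-for-rpc without an RPC server must be rejected:
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc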
00:05:56.520 [2024-07-15 13:51:54.402458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143287 ] 00:05:56.520 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.520 [2024-07-15 13:51:54.476048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.520 [2024-07-15 13:51:54.550981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.090 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.351 [2024-07-15 13:51:55.249021] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
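What exit_on_failed_rpc_init asserts here: the first target owns the default RPC socket, so a second instance launched against the same /var/tmp/spdk.sock must fail with "in use. Specify another." and exit non-zero. In outline:

    ./build/bin/spdk_tgt -m 0x1 &        # first instance claims /var/tmp/spdk.sock
    spdk_pid=$!
    waitforlisten $spdk_pid
    NOT ./build/bin/spdk_tgt -m 0x2      # second instance must fail: socket in use
    killprocess $spdk_pid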
00:05:57.351 [2024-07-15 13:51:55.249074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143504 ] 00:05:57.351 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.351 [2024-07-15 13:51:55.330645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.351 [2024-07-15 13:51:55.394511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.351 [2024-07-15 13:51:55.394573] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:57.351 [2024-07-15 13:51:55.394582] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.351 [2024-07-15 13:51:55.394589] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1143287 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1143287 ']' 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1143287 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.351 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143287 00:05:57.612 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.612 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.612 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143287' 00:05:57.612 killing process with pid 1143287 00:05:57.612 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1143287 00:05:57.612 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1143287 00:05:57.612 00:05:57.612 real 0m1.371s 00:05:57.612 user 0m1.616s 00:05:57.612 sys 0m0.378s 00:05:57.612 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.612 13:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.612 ************************************ 00:05:57.612 END TEST exit_on_failed_rpc_init 00:05:57.612 ************************************ 00:05:57.872 13:51:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.872 13:51:55 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:57.872 00:05:57.872 real 0m13.679s 00:05:57.872 user 0m13.329s 00:05:57.872 sys 0m1.432s 00:05:57.872 13:51:55 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.872 13:51:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.872 ************************************ 00:05:57.872 END TEST skip_rpc 00:05:57.872 ************************************ 00:05:57.872 13:51:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.872 13:51:55 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.872 13:51:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.872 13:51:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.872 13:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.872 ************************************ 00:05:57.872 START TEST rpc_client 00:05:57.872 ************************************ 00:05:57.872 13:51:55 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.872 * Looking for test storage... 00:05:57.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:57.872 13:51:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:57.872 OK 00:05:57.872 13:51:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:57.872 00:05:57.872 real 0m0.126s 00:05:57.872 user 0m0.055s 00:05:57.872 sys 0m0.078s 00:05:57.872 13:51:55 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.872 13:51:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.872 ************************************ 00:05:57.872 END TEST rpc_client 00:05:57.872 ************************************ 00:05:58.134 13:51:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.134 13:51:56 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.134 13:51:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.134 13:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.134 13:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.134 ************************************ 00:05:58.134 START TEST json_config 00:05:58.134 ************************************ 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.134 
13:51:56 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.134 13:51:56 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.134 13:51:56 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.134 13:51:56 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.134 13:51:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.134 13:51:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.134 13:51:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.134 13:51:56 json_config -- paths/export.sh@5 -- # export PATH 00:05:58.134 13:51:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@47 -- # : 0 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.134 13:51:56 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.134 13:51:56 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:58.134 INFO: JSON configuration test init 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.134 13:51:56 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:58.134 13:51:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:58.134 13:51:56 json_config -- json_config/common.sh@10 -- # shift 00:05:58.134 13:51:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.134 13:51:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.134 13:51:56 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.134 13:51:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.134 13:51:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.134 13:51:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1143800 00:05:58.134 13:51:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.134 Waiting for target to run... 00:05:58.134 13:51:56 json_config -- json_config/common.sh@25 -- # waitforlisten 1143800 /var/tmp/spdk_tgt.sock 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@829 -- # '[' -z 1143800 ']' 00:05:58.134 13:51:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.134 13:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.134 [2024-07-15 13:51:56.218979] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:58.134 [2024-07-15 13:51:56.219036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143800 ] 00:05:58.405 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.721 [2024-07-15 13:51:56.515847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.721 [2024-07-15 13:51:56.568059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.009 13:51:56 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.009 13:51:56 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:59.009 13:51:56 json_config -- json_config/common.sh@26 -- # echo '' 00:05:59.009 00:05:59.009 13:51:56 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:59.009 13:51:56 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:59.009 13:51:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.009 13:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.009 13:51:56 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:59.009 13:51:56 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:59.009 13:51:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.009 13:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.009 13:51:57 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:59.009 13:51:57 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:59.009 13:51:57 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:59.580 13:51:57 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:59.580 13:51:57 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:59.580 13:51:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.580 13:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.580 13:51:57 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:59.580 13:51:57 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:59.581 13:51:57 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:59.581 13:51:57 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:59.581 13:51:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:59.581 13:51:57 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:59.841 13:51:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.841 13:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:59.841 13:51:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.841 13:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.841 13:51:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.841 MallocForNvmf0 00:05:59.841 13:51:57 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:59.841 13:51:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:00.102 MallocForNvmf1 00:06:00.102 13:51:58 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:00.102 13:51:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:00.362 [2024-07-15 13:51:58.226930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.362 13:51:58 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:00.362 13:51:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:00.362 13:51:58 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:00.362 13:51:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:00.622 13:51:58 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:00.622 13:51:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:00.622 13:51:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:00.622 13:51:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:00.883 [2024-07-15 13:51:58.824897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:00.883 13:51:58 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:00.883 13:51:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.883 13:51:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.883 13:51:58 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:00.883 13:51:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.883 13:51:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.883 13:51:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:00.883 13:51:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.883 13:51:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.143 MallocBdevForConfigChangeCheck 00:06:01.143 13:51:59 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:01.143 13:51:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.143 13:51:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.143 13:51:59 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:01.143 13:51:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.404 13:51:59 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:01.404 INFO: shutting down applications... 00:06:01.404 13:51:59 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:01.404 13:51:59 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:01.404 13:51:59 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:01.404 13:51:59 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:01.974 Calling clear_iscsi_subsystem 00:06:01.974 Calling clear_nvmf_subsystem 00:06:01.974 Calling clear_nbd_subsystem 00:06:01.974 Calling clear_ublk_subsystem 00:06:01.974 Calling clear_vhost_blk_subsystem 00:06:01.974 Calling clear_vhost_scsi_subsystem 00:06:01.974 Calling clear_bdev_subsystem 00:06:01.974 13:51:59 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:01.974 13:51:59 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:01.974 13:51:59 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:01.974 13:51:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.974 13:51:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:01.974 13:51:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:02.234 13:52:00 json_config -- json_config/json_config.sh@345 -- # break 00:06:02.234 13:52:00 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:02.234 13:52:00 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:02.234 13:52:00 json_config -- json_config/common.sh@31 -- # local app=target 00:06:02.234 13:52:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.234 13:52:00 json_config -- json_config/common.sh@35 -- # [[ -n 1143800 ]] 00:06:02.234 13:52:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1143800 00:06:02.234 13:52:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.234 13:52:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.234 13:52:00 json_config -- json_config/common.sh@41 -- # kill -0 1143800 00:06:02.234 13:52:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.806 13:52:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.806 13:52:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.806 13:52:00 json_config -- json_config/common.sh@41 -- # kill -0 1143800 00:06:02.806 13:52:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.806 13:52:00 json_config -- json_config/common.sh@43 -- # break 00:06:02.806 13:52:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.806 13:52:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:02.806 SPDK target shutdown done 00:06:02.806 13:52:00 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:02.806 INFO: relaunching applications... 00:06:02.806 13:52:00 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.806 13:52:00 json_config -- json_config/common.sh@9 -- # local app=target 00:06:02.806 13:52:00 json_config -- json_config/common.sh@10 -- # shift 00:06:02.806 13:52:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.806 13:52:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.806 13:52:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.806 13:52:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.806 13:52:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.806 13:52:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1144765 00:06:02.806 13:52:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.806 Waiting for target to run... 00:06:02.806 13:52:00 json_config -- json_config/common.sh@25 -- # waitforlisten 1144765 /var/tmp/spdk_tgt.sock 00:06:02.806 13:52:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.806 13:52:00 json_config -- common/autotest_common.sh@829 -- # '[' -z 1144765 ']' 00:06:02.806 13:52:00 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.806 13:52:00 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.806 13:52:00 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.806 13:52:00 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.806 13:52:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.806 [2024-07-15 13:52:00.766117] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
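The relaunch above boots from spdk_tgt_config.json, the configuration saved from the first run; it encodes the nvmf stack that was built over RPC earlier in this test (malloc bdevs, TCP transport, subsystem, namespaces, listener). That build-out, condensed from the rpc.py calls in the trace:

    rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420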
00:06:02.806 [2024-07-15 13:52:00.766182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144765 ] 00:06:02.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.066 [2024-07-15 13:52:01.132571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.326 [2024-07-15 13:52:01.185692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.587 [2024-07-15 13:52:01.688769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.846 [2024-07-15 13:52:01.721125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.846 13:52:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.846 13:52:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:03.846 13:52:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:03.846 00:06:03.846 13:52:01 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:03.846 13:52:01 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:03.846 INFO: Checking if target configuration is the same... 00:06:03.846 13:52:01 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.846 13:52:01 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:03.846 13:52:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.846 + '[' 2 -ne 2 ']' 00:06:03.846 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:03.846 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:03.846 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.846 +++ basename /dev/fd/62 00:06:03.846 ++ mktemp /tmp/62.XXX 00:06:03.846 + tmp_file_1=/tmp/62.o7F 00:06:03.846 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.846 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.846 + tmp_file_2=/tmp/spdk_tgt_config.json.5qq 00:06:03.846 + ret=0 00:06:03.846 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.106 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.106 + diff -u /tmp/62.o7F /tmp/spdk_tgt_config.json.5qq 00:06:04.106 + echo 'INFO: JSON config files are the same' 00:06:04.106 INFO: JSON config files are the same 00:06:04.106 + rm /tmp/62.o7F /tmp/spdk_tgt_config.json.5qq 00:06:04.106 + exit 0 00:06:04.106 13:52:02 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:04.106 13:52:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:04.106 INFO: changing configuration and checking if this can be detected... 
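The equality check that just passed canonicalizes both sides with config_filter.py -method sort before diffing, so key ordering and other cosmetic differences are ignored. Condensed from the json_diff.sh steps above (mktemp handling omitted):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/file.json
    diff -u /tmp/live.json /tmp/file.json \
        && echo 'INFO: JSON config files are the same'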
00:06:04.106 13:52:02 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:04.106 13:52:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:04.365 13:52:02 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:04.365 13:52:02 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.365 13:52:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.365 + '[' 2 -ne 2 ']' 00:06:04.365 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:04.365 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:04.365 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:04.365 +++ basename /dev/fd/62 00:06:04.365 ++ mktemp /tmp/62.XXX 00:06:04.365 + tmp_file_1=/tmp/62.Hd0 00:06:04.365 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.365 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:04.365 + tmp_file_2=/tmp/spdk_tgt_config.json.MI4 00:06:04.365 + ret=0 00:06:04.365 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.625 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.625 + diff -u /tmp/62.Hd0 /tmp/spdk_tgt_config.json.MI4 00:06:04.625 + ret=1 00:06:04.625 + echo '=== Start of file: /tmp/62.Hd0 ===' 00:06:04.625 + cat /tmp/62.Hd0 00:06:04.625 + echo '=== End of file: /tmp/62.Hd0 ===' 00:06:04.625 + echo '' 00:06:04.625 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MI4 ===' 00:06:04.625 + cat /tmp/spdk_tgt_config.json.MI4 00:06:04.625 + echo '=== End of file: /tmp/spdk_tgt_config.json.MI4 ===' 00:06:04.625 + echo '' 00:06:04.625 + rm /tmp/62.Hd0 /tmp/spdk_tgt_config.json.MI4 00:06:04.625 + exit 1 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:04.625 INFO: configuration change detected. 
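Change detection then inverts the previous check: delete the sentinel bdev that exists only for this purpose, re-run the same sorted diff, and require it to fail this time. As a sketch, with the diff step condensed the same way as above:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ! diff -u \
        <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
            | ./test/json_config/config_filter.py -method sort) \
        <(./test/json_config/config_filter.py -method sort < spdk_tgt_config.json)
    then
        echo 'INFO: configuration change detected.'
    fi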
00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@317 -- # [[ -n 1144765 ]] 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.625 13:52:02 json_config -- json_config/json_config.sh@323 -- # killprocess 1144765 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@948 -- # '[' -z 1144765 ']' 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@952 -- # kill -0 1144765 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@953 -- # uname 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.625 13:52:02 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144765 00:06:04.885 13:52:02 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.885 13:52:02 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.885 13:52:02 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144765' 00:06:04.885 killing process with pid 1144765 00:06:04.885 13:52:02 json_config -- common/autotest_common.sh@967 -- # kill 1144765 00:06:04.885 13:52:02 json_config -- common/autotest_common.sh@972 -- # wait 1144765 00:06:05.145 13:52:03 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.145 13:52:03 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:05.145 13:52:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.146 13:52:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.146 13:52:03 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:05.146 13:52:03 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:05.146 INFO: Success 00:06:05.146 00:06:05.146 real 0m7.048s 
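In the json_config_extra_key preamble that follows, nvmf/common.sh again derives a host NQN and host ID pair for later nvme connect calls. One way to express that derivation (the UUID-stripping step is an assumption, not the literal common.sh source):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID, e.g. 801c19ac-...-a4bf019282bb
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")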
00:06:05.146 user 0m8.384s 00:06:05.146 sys 0m1.794s 00:06:05.146 13:52:03 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.146 13:52:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.146 ************************************ 00:06:05.146 END TEST json_config 00:06:05.146 ************************************ 00:06:05.146 13:52:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.146 13:52:03 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:05.146 13:52:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.146 13:52:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.146 13:52:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.146 ************************************ 00:06:05.146 START TEST json_config_extra_key 00:06:05.146 ************************************ 00:06:05.146 13:52:03 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:05.146 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.146 13:52:03 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.146 13:52:03 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.146 13:52:03 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.146 13:52:03 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.146 13:52:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.146 13:52:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.146 13:52:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:05.146 13:52:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.146 13:52:03 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.146 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:05.406 13:52:03 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:05.406 INFO: launching applications... 00:06:05.406 13:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1145536 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.406 Waiting for target to run... 00:06:05.406 13:52:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1145536 /var/tmp/spdk_tgt.sock 00:06:05.406 13:52:03 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1145536 ']' 00:06:05.406 13:52:03 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.407 13:52:03 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.407 13:52:03 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.407 13:52:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:05.407 13:52:03 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.407 13:52:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:05.407 [2024-07-15 13:52:03.322048] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
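waitforlisten, used on every target launch in this log, polls until the RPC socket answers or a retry budget is exhausted. A simplified sketch of the pattern (the real autotest_common.sh helper differs in details such as the probe command):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died early
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1 && return 0                 # socket is answering
            sleep 0.1
        done
        return 1
    }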
00:06:05.407 [2024-07-15 13:52:03.322118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145536 ] 00:06:05.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.666 [2024-07-15 13:52:03.707775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.667 [2024-07-15 13:52:03.759554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.237 13:52:04 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.237 13:52:04 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:06.237 00:06:06.237 13:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:06.237 INFO: shutting down applications... 00:06:06.237 13:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1145536 ]] 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1145536 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1145536 00:06:06.237 13:52:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.497 13:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.497 13:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.497 13:52:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1145536 00:06:06.497 13:52:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.497 13:52:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:06.497 13:52:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.497 13:52:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.497 SPDK target shutdown done 00:06:06.497 13:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:06.497 Success 00:06:06.497 00:06:06.497 real 0m1.428s 00:06:06.497 user 0m0.966s 00:06:06.497 sys 0m0.456s 00:06:06.497 13:52:04 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.497 13:52:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.497 ************************************ 00:06:06.497 END TEST json_config_extra_key 00:06:06.497 ************************************ 00:06:06.758 13:52:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.758 13:52:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.758 13:52:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.758 13:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.758 13:52:04 -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.758 ************************************ 00:06:06.758 START TEST alias_rpc 00:06:06.758 ************************************ 00:06:06.758 13:52:04 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.758 * Looking for test storage... 00:06:06.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:06.758 13:52:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:06.758 13:52:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1145921 00:06:06.758 13:52:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1145921 00:06:06.758 13:52:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.758 13:52:04 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1145921 ']' 00:06:06.758 13:52:04 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.758 13:52:04 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.758 13:52:04 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.758 13:52:04 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.758 13:52:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.758 [2024-07-15 13:52:04.837581] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:06.758 [2024-07-15 13:52:04.837644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145921 ] 00:06:06.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.018 [2024-07-15 13:52:04.910772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.018 [2024-07-15 13:52:04.984746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.589 13:52:05 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.589 13:52:05 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:07.589 13:52:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:07.849 13:52:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1145921 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1145921 ']' 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1145921 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145921 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145921' 00:06:07.849 killing process with pid 1145921 00:06:07.849 13:52:05 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1145921 00:06:07.849 13:52:05 alias_rpc -- common/autotest_common.sh@972 -- # wait 1145921 00:06:08.109 00:06:08.109 real 0m1.396s 00:06:08.109 user 0m1.511s 00:06:08.109 sys 0m0.410s 00:06:08.109 13:52:06 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.109 13:52:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.109 ************************************ 00:06:08.109 END TEST alias_rpc 00:06:08.109 ************************************ 00:06:08.109 13:52:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.109 13:52:06 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:08.109 13:52:06 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:08.109 13:52:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.109 13:52:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.109 13:52:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.109 ************************************ 00:06:08.109 START TEST spdkcli_tcp 00:06:08.109 ************************************ 00:06:08.109 13:52:06 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:08.368 * Looking for test storage... 00:06:08.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1146203 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1146203 00:06:08.368 13:52:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1146203 ']' 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.368 13:52:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.368 [2024-07-15 13:52:06.306484] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
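The alias_rpc test that just completed is narrower than it looks: it starts a plain spdk_tgt on the default /var/tmp/spdk.sock socket, pipes a JSON configuration into rpc.py load_config -i, and relies on the ERR trap shown above to kill the target if the load fails. A rough replay, hedged in two places: the empty config below is a stand-in (the log elides what the test actually feeds in), and load_config is assumed to read stdin when no filename is given, consistent with how the test invokes it:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Load a config into the running target; -i is passed exactly as the
  # test does above (it ties into the RPC alias handling under test).
  echo '{ "subsystems": [] }' | "$SPDK_DIR/scripts/rpc.py" load_config -i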
00:06:08.368 [2024-07-15 13:52:06.306550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146203 ] 00:06:08.368 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.368 [2024-07-15 13:52:06.380224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.369 [2024-07-15 13:52:06.455181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.369 [2024-07-15 13:52:06.455183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.307 13:52:07 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.307 13:52:07 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:09.307 13:52:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:09.307 13:52:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1146323 00:06:09.307 13:52:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:09.307 [ 00:06:09.307 "bdev_malloc_delete", 00:06:09.307 "bdev_malloc_create", 00:06:09.307 "bdev_null_resize", 00:06:09.307 "bdev_null_delete", 00:06:09.307 "bdev_null_create", 00:06:09.307 "bdev_nvme_cuse_unregister", 00:06:09.307 "bdev_nvme_cuse_register", 00:06:09.307 "bdev_opal_new_user", 00:06:09.307 "bdev_opal_set_lock_state", 00:06:09.307 "bdev_opal_delete", 00:06:09.307 "bdev_opal_get_info", 00:06:09.307 "bdev_opal_create", 00:06:09.307 "bdev_nvme_opal_revert", 00:06:09.307 "bdev_nvme_opal_init", 00:06:09.307 "bdev_nvme_send_cmd", 00:06:09.307 "bdev_nvme_get_path_iostat", 00:06:09.307 "bdev_nvme_get_mdns_discovery_info", 00:06:09.307 "bdev_nvme_stop_mdns_discovery", 00:06:09.307 "bdev_nvme_start_mdns_discovery", 00:06:09.307 "bdev_nvme_set_multipath_policy", 00:06:09.307 "bdev_nvme_set_preferred_path", 00:06:09.307 "bdev_nvme_get_io_paths", 00:06:09.307 "bdev_nvme_remove_error_injection", 00:06:09.307 "bdev_nvme_add_error_injection", 00:06:09.307 "bdev_nvme_get_discovery_info", 00:06:09.307 "bdev_nvme_stop_discovery", 00:06:09.307 "bdev_nvme_start_discovery", 00:06:09.307 "bdev_nvme_get_controller_health_info", 00:06:09.307 "bdev_nvme_disable_controller", 00:06:09.307 "bdev_nvme_enable_controller", 00:06:09.307 "bdev_nvme_reset_controller", 00:06:09.307 "bdev_nvme_get_transport_statistics", 00:06:09.307 "bdev_nvme_apply_firmware", 00:06:09.307 "bdev_nvme_detach_controller", 00:06:09.307 "bdev_nvme_get_controllers", 00:06:09.307 "bdev_nvme_attach_controller", 00:06:09.307 "bdev_nvme_set_hotplug", 00:06:09.307 "bdev_nvme_set_options", 00:06:09.307 "bdev_passthru_delete", 00:06:09.307 "bdev_passthru_create", 00:06:09.307 "bdev_lvol_set_parent_bdev", 00:06:09.307 "bdev_lvol_set_parent", 00:06:09.307 "bdev_lvol_check_shallow_copy", 00:06:09.307 "bdev_lvol_start_shallow_copy", 00:06:09.307 "bdev_lvol_grow_lvstore", 00:06:09.307 "bdev_lvol_get_lvols", 00:06:09.308 "bdev_lvol_get_lvstores", 00:06:09.308 "bdev_lvol_delete", 00:06:09.308 "bdev_lvol_set_read_only", 00:06:09.308 "bdev_lvol_resize", 00:06:09.308 "bdev_lvol_decouple_parent", 00:06:09.308 "bdev_lvol_inflate", 00:06:09.308 "bdev_lvol_rename", 00:06:09.308 "bdev_lvol_clone_bdev", 00:06:09.308 "bdev_lvol_clone", 00:06:09.308 "bdev_lvol_snapshot", 00:06:09.308 "bdev_lvol_create", 00:06:09.308 "bdev_lvol_delete_lvstore", 00:06:09.308 
"bdev_lvol_rename_lvstore", 00:06:09.308 "bdev_lvol_create_lvstore", 00:06:09.308 "bdev_raid_set_options", 00:06:09.308 "bdev_raid_remove_base_bdev", 00:06:09.308 "bdev_raid_add_base_bdev", 00:06:09.308 "bdev_raid_delete", 00:06:09.308 "bdev_raid_create", 00:06:09.308 "bdev_raid_get_bdevs", 00:06:09.308 "bdev_error_inject_error", 00:06:09.308 "bdev_error_delete", 00:06:09.308 "bdev_error_create", 00:06:09.308 "bdev_split_delete", 00:06:09.308 "bdev_split_create", 00:06:09.308 "bdev_delay_delete", 00:06:09.308 "bdev_delay_create", 00:06:09.308 "bdev_delay_update_latency", 00:06:09.308 "bdev_zone_block_delete", 00:06:09.308 "bdev_zone_block_create", 00:06:09.308 "blobfs_create", 00:06:09.308 "blobfs_detect", 00:06:09.308 "blobfs_set_cache_size", 00:06:09.308 "bdev_aio_delete", 00:06:09.308 "bdev_aio_rescan", 00:06:09.308 "bdev_aio_create", 00:06:09.308 "bdev_ftl_set_property", 00:06:09.308 "bdev_ftl_get_properties", 00:06:09.308 "bdev_ftl_get_stats", 00:06:09.308 "bdev_ftl_unmap", 00:06:09.308 "bdev_ftl_unload", 00:06:09.308 "bdev_ftl_delete", 00:06:09.308 "bdev_ftl_load", 00:06:09.308 "bdev_ftl_create", 00:06:09.308 "bdev_virtio_attach_controller", 00:06:09.308 "bdev_virtio_scsi_get_devices", 00:06:09.308 "bdev_virtio_detach_controller", 00:06:09.308 "bdev_virtio_blk_set_hotplug", 00:06:09.308 "bdev_iscsi_delete", 00:06:09.308 "bdev_iscsi_create", 00:06:09.308 "bdev_iscsi_set_options", 00:06:09.308 "accel_error_inject_error", 00:06:09.308 "ioat_scan_accel_module", 00:06:09.308 "dsa_scan_accel_module", 00:06:09.308 "iaa_scan_accel_module", 00:06:09.308 "vfu_virtio_create_scsi_endpoint", 00:06:09.308 "vfu_virtio_scsi_remove_target", 00:06:09.308 "vfu_virtio_scsi_add_target", 00:06:09.308 "vfu_virtio_create_blk_endpoint", 00:06:09.308 "vfu_virtio_delete_endpoint", 00:06:09.308 "keyring_file_remove_key", 00:06:09.308 "keyring_file_add_key", 00:06:09.308 "keyring_linux_set_options", 00:06:09.308 "iscsi_get_histogram", 00:06:09.308 "iscsi_enable_histogram", 00:06:09.308 "iscsi_set_options", 00:06:09.308 "iscsi_get_auth_groups", 00:06:09.308 "iscsi_auth_group_remove_secret", 00:06:09.308 "iscsi_auth_group_add_secret", 00:06:09.308 "iscsi_delete_auth_group", 00:06:09.308 "iscsi_create_auth_group", 00:06:09.308 "iscsi_set_discovery_auth", 00:06:09.308 "iscsi_get_options", 00:06:09.308 "iscsi_target_node_request_logout", 00:06:09.308 "iscsi_target_node_set_redirect", 00:06:09.308 "iscsi_target_node_set_auth", 00:06:09.308 "iscsi_target_node_add_lun", 00:06:09.308 "iscsi_get_stats", 00:06:09.308 "iscsi_get_connections", 00:06:09.308 "iscsi_portal_group_set_auth", 00:06:09.308 "iscsi_start_portal_group", 00:06:09.308 "iscsi_delete_portal_group", 00:06:09.308 "iscsi_create_portal_group", 00:06:09.308 "iscsi_get_portal_groups", 00:06:09.308 "iscsi_delete_target_node", 00:06:09.308 "iscsi_target_node_remove_pg_ig_maps", 00:06:09.308 "iscsi_target_node_add_pg_ig_maps", 00:06:09.308 "iscsi_create_target_node", 00:06:09.308 "iscsi_get_target_nodes", 00:06:09.308 "iscsi_delete_initiator_group", 00:06:09.308 "iscsi_initiator_group_remove_initiators", 00:06:09.308 "iscsi_initiator_group_add_initiators", 00:06:09.308 "iscsi_create_initiator_group", 00:06:09.308 "iscsi_get_initiator_groups", 00:06:09.308 "nvmf_set_crdt", 00:06:09.308 "nvmf_set_config", 00:06:09.308 "nvmf_set_max_subsystems", 00:06:09.308 "nvmf_stop_mdns_prr", 00:06:09.308 "nvmf_publish_mdns_prr", 00:06:09.308 "nvmf_subsystem_get_listeners", 00:06:09.308 "nvmf_subsystem_get_qpairs", 00:06:09.308 "nvmf_subsystem_get_controllers", 00:06:09.308 
"nvmf_get_stats", 00:06:09.308 "nvmf_get_transports", 00:06:09.308 "nvmf_create_transport", 00:06:09.308 "nvmf_get_targets", 00:06:09.308 "nvmf_delete_target", 00:06:09.308 "nvmf_create_target", 00:06:09.308 "nvmf_subsystem_allow_any_host", 00:06:09.308 "nvmf_subsystem_remove_host", 00:06:09.308 "nvmf_subsystem_add_host", 00:06:09.308 "nvmf_ns_remove_host", 00:06:09.308 "nvmf_ns_add_host", 00:06:09.308 "nvmf_subsystem_remove_ns", 00:06:09.308 "nvmf_subsystem_add_ns", 00:06:09.308 "nvmf_subsystem_listener_set_ana_state", 00:06:09.308 "nvmf_discovery_get_referrals", 00:06:09.308 "nvmf_discovery_remove_referral", 00:06:09.308 "nvmf_discovery_add_referral", 00:06:09.308 "nvmf_subsystem_remove_listener", 00:06:09.308 "nvmf_subsystem_add_listener", 00:06:09.308 "nvmf_delete_subsystem", 00:06:09.308 "nvmf_create_subsystem", 00:06:09.308 "nvmf_get_subsystems", 00:06:09.308 "env_dpdk_get_mem_stats", 00:06:09.308 "nbd_get_disks", 00:06:09.308 "nbd_stop_disk", 00:06:09.308 "nbd_start_disk", 00:06:09.308 "ublk_recover_disk", 00:06:09.308 "ublk_get_disks", 00:06:09.308 "ublk_stop_disk", 00:06:09.308 "ublk_start_disk", 00:06:09.308 "ublk_destroy_target", 00:06:09.308 "ublk_create_target", 00:06:09.308 "virtio_blk_create_transport", 00:06:09.308 "virtio_blk_get_transports", 00:06:09.308 "vhost_controller_set_coalescing", 00:06:09.308 "vhost_get_controllers", 00:06:09.308 "vhost_delete_controller", 00:06:09.308 "vhost_create_blk_controller", 00:06:09.308 "vhost_scsi_controller_remove_target", 00:06:09.308 "vhost_scsi_controller_add_target", 00:06:09.308 "vhost_start_scsi_controller", 00:06:09.308 "vhost_create_scsi_controller", 00:06:09.308 "thread_set_cpumask", 00:06:09.308 "framework_get_governor", 00:06:09.308 "framework_get_scheduler", 00:06:09.308 "framework_set_scheduler", 00:06:09.308 "framework_get_reactors", 00:06:09.308 "thread_get_io_channels", 00:06:09.308 "thread_get_pollers", 00:06:09.308 "thread_get_stats", 00:06:09.308 "framework_monitor_context_switch", 00:06:09.308 "spdk_kill_instance", 00:06:09.308 "log_enable_timestamps", 00:06:09.308 "log_get_flags", 00:06:09.308 "log_clear_flag", 00:06:09.308 "log_set_flag", 00:06:09.308 "log_get_level", 00:06:09.308 "log_set_level", 00:06:09.308 "log_get_print_level", 00:06:09.308 "log_set_print_level", 00:06:09.308 "framework_enable_cpumask_locks", 00:06:09.308 "framework_disable_cpumask_locks", 00:06:09.308 "framework_wait_init", 00:06:09.308 "framework_start_init", 00:06:09.308 "scsi_get_devices", 00:06:09.308 "bdev_get_histogram", 00:06:09.308 "bdev_enable_histogram", 00:06:09.308 "bdev_set_qos_limit", 00:06:09.308 "bdev_set_qd_sampling_period", 00:06:09.308 "bdev_get_bdevs", 00:06:09.308 "bdev_reset_iostat", 00:06:09.308 "bdev_get_iostat", 00:06:09.308 "bdev_examine", 00:06:09.308 "bdev_wait_for_examine", 00:06:09.308 "bdev_set_options", 00:06:09.308 "notify_get_notifications", 00:06:09.308 "notify_get_types", 00:06:09.308 "accel_get_stats", 00:06:09.308 "accel_set_options", 00:06:09.308 "accel_set_driver", 00:06:09.308 "accel_crypto_key_destroy", 00:06:09.308 "accel_crypto_keys_get", 00:06:09.308 "accel_crypto_key_create", 00:06:09.308 "accel_assign_opc", 00:06:09.308 "accel_get_module_info", 00:06:09.308 "accel_get_opc_assignments", 00:06:09.308 "vmd_rescan", 00:06:09.308 "vmd_remove_device", 00:06:09.308 "vmd_enable", 00:06:09.308 "sock_get_default_impl", 00:06:09.308 "sock_set_default_impl", 00:06:09.308 "sock_impl_set_options", 00:06:09.308 "sock_impl_get_options", 00:06:09.308 "iobuf_get_stats", 00:06:09.308 "iobuf_set_options", 
00:06:09.308 "keyring_get_keys", 00:06:09.308 "framework_get_pci_devices", 00:06:09.308 "framework_get_config", 00:06:09.308 "framework_get_subsystems", 00:06:09.308 "vfu_tgt_set_base_path", 00:06:09.308 "trace_get_info", 00:06:09.308 "trace_get_tpoint_group_mask", 00:06:09.308 "trace_disable_tpoint_group", 00:06:09.308 "trace_enable_tpoint_group", 00:06:09.308 "trace_clear_tpoint_mask", 00:06:09.308 "trace_set_tpoint_mask", 00:06:09.308 "spdk_get_version", 00:06:09.308 "rpc_get_methods" 00:06:09.308 ] 00:06:09.308 13:52:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.308 13:52:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:09.308 13:52:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1146203 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1146203 ']' 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1146203 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146203 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146203' 00:06:09.308 killing process with pid 1146203 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1146203 00:06:09.308 13:52:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1146203 00:06:09.569 00:06:09.569 real 0m1.418s 00:06:09.569 user 0m2.580s 00:06:09.569 sys 0m0.438s 00:06:09.569 13:52:07 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.569 13:52:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.569 ************************************ 00:06:09.569 END TEST spdkcli_tcp 00:06:09.569 ************************************ 00:06:09.569 13:52:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.569 13:52:07 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:09.569 13:52:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.569 13:52:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.569 13:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:09.569 ************************************ 00:06:09.569 START TEST dpdk_mem_utility 00:06:09.569 ************************************ 00:06:09.569 13:52:07 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:09.829 * Looking for test storage... 
00:06:09.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:09.829 13:52:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:09.829 13:52:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1146499 00:06:09.829 13:52:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1146499 00:06:09.829 13:52:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.829 13:52:07 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1146499 ']' 00:06:09.829 13:52:07 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.829 13:52:07 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.829 13:52:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.829 13:52:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.829 13:52:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.829 [2024-07-15 13:52:07.787120] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:09.829 [2024-07-15 13:52:07.787176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146499 ] 00:06:09.829 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.829 [2024-07-15 13:52:07.857513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.829 [2024-07-15 13:52:07.927888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.810 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.810 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:10.810 13:52:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:10.810 13:52:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:10.810 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.810 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.810 { 00:06:10.810 "filename": "/tmp/spdk_mem_dump.txt" 00:06:10.810 } 00:06:10.810 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.810 13:52:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:10.810 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:10.810 1 heaps totaling size 814.000000 MiB 00:06:10.810 size: 814.000000 MiB heap id: 0 00:06:10.810 end heaps---------- 00:06:10.810 8 mempools totaling size 598.116089 MiB 00:06:10.810 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:10.810 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:10.810 size: 84.521057 MiB name: bdev_io_1146499 00:06:10.810 size: 51.011292 MiB name: evtpool_1146499 00:06:10.810 
size: 50.003479 MiB name: msgpool_1146499 00:06:10.810 size: 21.763794 MiB name: PDU_Pool 00:06:10.810 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:10.810 size: 0.026123 MiB name: Session_Pool 00:06:10.810 end mempools------- 00:06:10.810 6 memzones totaling size 4.142822 MiB 00:06:10.810 size: 1.000366 MiB name: RG_ring_0_1146499 00:06:10.810 size: 1.000366 MiB name: RG_ring_1_1146499 00:06:10.810 size: 1.000366 MiB name: RG_ring_4_1146499 00:06:10.810 size: 1.000366 MiB name: RG_ring_5_1146499 00:06:10.810 size: 0.125366 MiB name: RG_ring_2_1146499 00:06:10.810 size: 0.015991 MiB name: RG_ring_3_1146499 00:06:10.810 end memzones------- 00:06:10.810 13:52:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:10.810 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:10.810 list of free elements. size: 12.519348 MiB 00:06:10.810 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:10.810 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:10.810 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:10.810 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:10.810 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:10.810 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:10.810 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:10.810 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:10.810 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:10.810 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:10.810 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:10.810 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:10.810 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:10.810 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:10.810 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:10.810 list of standard malloc elements. 
size: 199.218079 MiB 00:06:10.810 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:10.810 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:10.810 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:10.810 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:10.810 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:10.810 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:10.810 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:10.810 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:10.810 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:10.810 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:10.810 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:10.810 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:10.810 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:10.810 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:10.810 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:10.810 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:10.810 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:10.810 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:10.810 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:10.810 list of memzone associated elements. 
size: 602.262573 MiB 00:06:10.811 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:10.811 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:10.811 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:10.811 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:10.811 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:10.811 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1146499_0 00:06:10.811 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:10.811 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1146499_0 00:06:10.811 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:10.811 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1146499_0 00:06:10.811 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:10.811 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:10.811 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:10.811 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:10.811 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:10.811 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1146499 00:06:10.811 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:10.811 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1146499 00:06:10.811 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:10.811 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1146499 00:06:10.811 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:10.811 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:10.811 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:10.811 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:10.811 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:10.811 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:10.811 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:10.811 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:10.811 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:10.811 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1146499 00:06:10.811 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:10.811 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1146499 00:06:10.811 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:10.811 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1146499 00:06:10.811 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:10.811 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1146499 00:06:10.811 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:10.811 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1146499 00:06:10.811 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:10.811 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:10.811 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:10.811 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:10.811 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:10.811 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:10.811 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:10.811 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1146499 00:06:10.811 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:10.811 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:10.811 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:10.811 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:10.811 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:10.811 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1146499 00:06:10.811 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:10.811 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:10.811 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:10.811 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1146499 00:06:10.811 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:10.811 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1146499 00:06:10.811 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:10.811 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:10.811 13:52:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:10.811 13:52:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1146499 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1146499 ']' 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1146499 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146499 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146499' 00:06:10.811 killing process with pid 1146499 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1146499 00:06:10.811 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1146499 00:06:11.072 00:06:11.072 real 0m1.295s 00:06:11.072 user 0m1.372s 00:06:11.072 sys 0m0.380s 00:06:11.072 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.072 13:52:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.072 ************************************ 00:06:11.072 END TEST dpdk_mem_utility 00:06:11.072 ************************************ 00:06:11.072 13:52:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.072 13:52:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:11.072 13:52:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.072 13:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.072 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:06:11.072 ************************************ 00:06:11.072 START TEST event 00:06:11.072 ************************************ 00:06:11.072 13:52:08 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:11.072 * Looking for test storage... 
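The dpdk_mem_utility pass above has a two-step shape worth spelling out: the env_dpdk_get_mem_stats RPC asks the target to dump its DPDK memory state (the reply names /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py then summarizes that dump, first as the heap/mempool/memzone overview and then, with -m 0, as the per-element breakdown of heap 0 reproduced above. A condensed replay, under the assumption that the script reads the dump file the RPC just wrote:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Ask the running target to write its DPDK memory dump...
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats    # reply names /tmp/spdk_mem_dump.txt
  # ...then summarize it: overview first, then heap 0 in detail.
  "$SPDK_DIR/scripts/dpdk_mem_info.py"
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0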
00:06:11.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:11.072 13:52:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:11.072 13:52:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:11.072 13:52:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.072 13:52:09 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:11.072 13:52:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.072 13:52:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.072 ************************************ 00:06:11.072 START TEST event_perf 00:06:11.072 ************************************ 00:06:11.072 13:52:09 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.072 Running I/O for 1 seconds...[2024-07-15 13:52:09.160160] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:11.072 [2024-07-15 13:52:09.160262] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146786 ] 00:06:11.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.332 [2024-07-15 13:52:09.232400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.332 [2024-07-15 13:52:09.301025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.332 [2024-07-15 13:52:09.301140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.332 [2024-07-15 13:52:09.301295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.332 Running I/O for 1 seconds...[2024-07-15 13:52:09.301295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.274 00:06:12.274 lcore 0: 174205 00:06:12.274 lcore 1: 174208 00:06:12.274 lcore 2: 174203 00:06:12.274 lcore 3: 174206 00:06:12.274 done. 00:06:12.274 00:06:12.274 real 0m1.217s 00:06:12.274 user 0m4.129s 00:06:12.274 sys 0m0.084s 00:06:12.274 13:52:10 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.274 13:52:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.274 ************************************ 00:06:12.274 END TEST event_perf 00:06:12.274 ************************************ 00:06:12.536 13:52:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:12.536 13:52:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.536 13:52:10 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:12.536 13:52:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.536 13:52:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.536 ************************************ 00:06:12.536 START TEST event_reactor 00:06:12.536 ************************************ 00:06:12.536 13:52:10 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.536 [2024-07-15 13:52:10.441507] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
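As a quick sanity check on the event_perf numbers above: the run used all four reactors (-m 0xF) for one second (-t 1), and the four lcore counters sum to 174205 + 174208 + 174203 + 174206 = 696822 events, roughly 174k events per core per second; the user time of 0m4.129s (about 4 cores x 1 s) is consistent with that. A one-liner to total a saved run, keyed on the "lcore N: COUNT" lines shown above:

  # Sum per-core event counts from captured event_perf output.
  grep -E 'lcore [0-9]+:' event_perf.log | awk '{ total += $NF } END { print total, "events total" }'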
00:06:12.536 [2024-07-15 13:52:10.441600] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147142 ] 00:06:12.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.536 [2024-07-15 13:52:10.511044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.536 [2024-07-15 13:52:10.576245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.919 test_start 00:06:13.919 oneshot 00:06:13.919 tick 100 00:06:13.919 tick 100 00:06:13.919 tick 250 00:06:13.919 tick 100 00:06:13.919 tick 100 00:06:13.919 tick 100 00:06:13.919 tick 250 00:06:13.919 tick 500 00:06:13.919 tick 100 00:06:13.919 tick 100 00:06:13.919 tick 250 00:06:13.919 tick 100 00:06:13.919 tick 100 00:06:13.919 test_end 00:06:13.919 00:06:13.919 real 0m1.208s 00:06:13.919 user 0m1.134s 00:06:13.919 sys 0m0.070s 00:06:13.919 13:52:11 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.919 13:52:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:13.919 ************************************ 00:06:13.919 END TEST event_reactor 00:06:13.919 ************************************ 00:06:13.919 13:52:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:13.919 13:52:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.919 13:52:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:13.919 13:52:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.919 13:52:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.919 ************************************ 00:06:13.919 START TEST event_reactor_perf 00:06:13.919 ************************************ 00:06:13.919 13:52:11 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.919 [2024-07-15 13:52:11.712710] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:13.919 [2024-07-15 13:52:11.712764] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147492 ] 00:06:13.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.919 [2024-07-15 13:52:11.776938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.919 [2024-07-15 13:52:11.840946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.861 test_start 00:06:14.861 test_end 00:06:14.861 Performance: 368520 events per second 00:06:14.861 00:06:14.861 real 0m1.189s 00:06:14.861 user 0m1.118s 00:06:14.861 sys 0m0.067s 00:06:14.861 13:52:12 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.861 13:52:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.861 ************************************ 00:06:14.861 END TEST event_reactor_perf 00:06:14.861 ************************************ 00:06:14.861 13:52:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:14.861 13:52:12 event -- event/event.sh@49 -- # uname -s 00:06:14.861 13:52:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.861 13:52:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:14.861 13:52:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.861 13:52:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.861 13:52:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.122 ************************************ 00:06:15.122 START TEST event_scheduler 00:06:15.122 ************************************ 00:06:15.122 13:52:12 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:15.122 * Looking for test storage... 00:06:15.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:15.122 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:15.122 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1147797 00:06:15.122 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.122 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:15.122 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1147797 00:06:15.122 13:52:13 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1147797 ']' 00:06:15.122 13:52:13 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.122 13:52:13 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.122 13:52:13 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
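The scheduler suite setting up above launches its test application paused: --wait-for-rpc holds an SPDK app before framework initialization so the scheduler can be chosen over RPC first, which is exactly the framework_set_scheduler/framework_start_init sequence that follows. A sketch of the launch, flags copied from the trace (-m 0xF spans four cores; -p 0x2 selects the main lcore and resolves to --main-lcore=2 in the EAL line below; -f is specific to this test app and left uninterpreted here):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the scheduler test app, held before framework init.
  "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!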
00:06:15.122 13:52:13 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.122 13:52:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.122 [2024-07-15 13:52:13.131921] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:15.122 [2024-07-15 13:52:13.131992] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147797 ] 00:06:15.122 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.122 [2024-07-15 13:52:13.195909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.382 [2024-07-15 13:52:13.263344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.382 [2024-07-15 13:52:13.263506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.382 [2024-07-15 13:52:13.263657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.382 [2024-07-15 13:52:13.263659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.955 13:52:13 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.955 13:52:13 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:15.955 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.955 13:52:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.955 13:52:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.955 [2024-07-15 13:52:13.917704] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:15.955 [2024-07-15 13:52:13.917718] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.955 [2024-07-15 13:52:13.917726] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.955 [2024-07-15 13:52:13.917730] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.955 [2024-07-15 13:52:13.917734] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.955 13:52:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.955 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.955 13:52:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.955 13:52:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 [2024-07-15 13:52:13.971289] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
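Those two rpc_cmd calls above are the whole trick: while the app is still held by --wait-for-rpc, the test installs the dynamic scheduler (the governor error and the load/core/busy notices are its initialization chatter), then releases the app into normal operation, producing the "Scheduler test application started" notice. Done by hand against the default /var/tmp/spdk.sock socket, the same sequence would look like:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Select the dynamic scheduler while subsystem init is still pending...
  "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
  # ...then let the framework finish initializing.
  "$SPDK_DIR/scripts/rpc.py" framework_start_init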
00:06:15.956 13:52:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.956 13:52:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.956 13:52:13 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.956 13:52:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.956 13:52:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 ************************************ 00:06:15.956 START TEST scheduler_create_thread 00:06:15.956 ************************************ 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 2 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 3 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 4 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 5 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.956 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 6 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.218 7 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.218 8 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.218 9 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.218 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.479 10 00:06:16.479 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.479 13:52:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:16.479 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.479 13:52:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.889 13:52:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.889 13:52:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:17.889 13:52:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:17.889 13:52:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.889 13:52:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.892 13:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.892 13:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.892 13:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.892 13:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.464 13:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.464 13:52:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:19.464 13:52:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:19.464 13:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.464 13:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.407 13:52:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.407 00:06:20.407 real 0m4.222s 00:06:20.407 user 0m0.027s 00:06:20.407 sys 0m0.004s 00:06:20.407 13:52:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.407 13:52:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.407 ************************************ 00:06:20.407 END TEST scheduler_create_thread 00:06:20.407 ************************************ 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:20.407 13:52:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:20.407 13:52:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1147797 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1147797 ']' 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1147797 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1147797 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1147797' 00:06:20.407 killing process with pid 1147797 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1147797 00:06:20.407 13:52:18 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1147797 00:06:20.407 [2024-07-15 13:52:18.508420] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
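The scheduler_create_thread subtest that just passed drives everything through an rpc.py plugin: pinned busy and idle threads on each core, an unpinned thread at 30% activity, a half_active thread whose returned id (11 here) is raised to 50% activity, and a deleted thread (id 12) that is removed again. A condensed replay with every method and argument copied from the trace; the PYTHONPATH line is an assumption about how rpc.py locates scheduler_plugin, since the log never shows it:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"   # assumed plugin location
  rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy, pinned to core 0
  rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle, pinned to core 0
                                                               # (the test repeats both for masks 0x2/0x4/0x8)
  rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% active
  half_id=$(rpc scheduler_thread_create -n half_active -a 0)   # id 11 in the run above
  rpc scheduler_thread_set_active "$half_id" 50                # raise it to 50% activity
  del_id=$(rpc scheduler_thread_create -n deleted -a 100)      # id 12 in the run above
  rpc scheduler_thread_delete "$del_id"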
00:06:20.668 00:06:20.668 real 0m5.706s 00:06:20.668 user 0m12.709s 00:06:20.668 sys 0m0.359s 00:06:20.668 13:52:18 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.668 13:52:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.668 ************************************ 00:06:20.668 END TEST event_scheduler 00:06:20.668 ************************************ 00:06:20.668 13:52:18 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.668 13:52:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:20.668 13:52:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:20.668 13:52:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.668 13:52:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.668 13:52:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.668 ************************************ 00:06:20.668 START TEST app_repeat 00:06:20.668 ************************************ 00:06:20.668 13:52:18 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:20.668 13:52:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1148942 00:06:20.669 13:52:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.669 13:52:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:20.669 13:52:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1148942' 00:06:20.669 Process app_repeat pid: 1148942 00:06:20.669 13:52:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.669 13:52:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:20.669 spdk_app_start Round 0 00:06:20.669 13:52:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1148942 /var/tmp/spdk-nbd.sock 00:06:20.669 13:52:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1148942 ']' 00:06:20.669 13:52:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.669 13:52:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.669 13:52:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.669 13:52:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.669 13:52:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.929 [2024-07-15 13:52:18.800954] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:20.929 [2024-07-15 13:52:18.801018] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148942 ] 00:06:20.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.929 [2024-07-15 13:52:18.872079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.929 [2024-07-15 13:52:18.945770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.929 [2024-07-15 13:52:18.945788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.501 13:52:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.501 13:52:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:21.501 13:52:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.762 Malloc0 00:06:21.762 13:52:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.022 Malloc1 00:06:22.022 13:52:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.022 13:52:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.022 /dev/nbd0 00:06:22.022 13:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.022 13:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.022 13:52:20 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.022 1+0 records in 00:06:22.022 1+0 records out 00:06:22.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000122504 s, 33.4 MB/s 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.022 13:52:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:22.022 13:52:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.022 13:52:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.022 13:52:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.281 /dev/nbd1 00:06:22.281 13:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.281 13:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.281 1+0 records in 00:06:22.281 1+0 records out 00:06:22.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277044 s, 14.8 MB/s 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.281 13:52:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:22.281 13:52:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.281 13:52:20 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.281 13:52:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.281 13:52:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.281 13:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.541 { 00:06:22.541 "nbd_device": "/dev/nbd0", 00:06:22.541 "bdev_name": "Malloc0" 00:06:22.541 }, 00:06:22.541 { 00:06:22.541 "nbd_device": "/dev/nbd1", 00:06:22.541 "bdev_name": "Malloc1" 00:06:22.541 } 00:06:22.541 ]' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.541 { 00:06:22.541 "nbd_device": "/dev/nbd0", 00:06:22.541 "bdev_name": "Malloc0" 00:06:22.541 }, 00:06:22.541 { 00:06:22.541 "nbd_device": "/dev/nbd1", 00:06:22.541 "bdev_name": "Malloc1" 00:06:22.541 } 00:06:22.541 ]' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.541 /dev/nbd1' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.541 /dev/nbd1' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.541 256+0 records in 00:06:22.541 256+0 records out 00:06:22.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121514 s, 86.3 MB/s 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.541 256+0 records in 00:06:22.541 256+0 records out 00:06:22.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176666 s, 59.4 MB/s 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.541 256+0 records in 00:06:22.541 256+0 records out 00:06:22.541 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0176468 s, 59.4 MB/s 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.541 13:52:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.801 13:52:20 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.801 13:52:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.063 13:52:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.063 13:52:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.063 13:52:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.063 13:52:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.063 13:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.063 13:52:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.063 13:52:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.323 13:52:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.323 [2024-07-15 13:52:21.425545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.582 [2024-07-15 13:52:21.488648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.583 [2024-07-15 13:52:21.488649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.583 [2024-07-15 13:52:21.519936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.583 [2024-07-15 13:52:21.519974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.879 13:52:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.879 13:52:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:26.879 spdk_app_start Round 1 00:06:26.879 13:52:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1148942 /var/tmp/spdk-nbd.sock 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1148942 ']' 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
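The nbd_dd_data_verify steps traced above reduce to a plain write-then-compare round trip. A hedged sketch of the equivalent standalone commands, assuming /dev/nbd0 and /dev/nbd1 are already exported and using a shortened path for the scratch file:

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB of random data
  for d in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct   # write it through each nbd device
  done
  for d in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M nbdrandtest $d                              # read back and byte-compare the first 1 MiB
  done
  rm nbdrandtest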
00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:26.879 13:52:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.879 Malloc0 00:06:26.879 13:52:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.879 Malloc1 00:06:26.879 13:52:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.879 /dev/nbd0 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:26.879 1+0 records in 00:06:26.879 1+0 records out 00:06:26.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333289 s, 12.3 MB/s 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.879 13:52:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.879 13:52:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.140 /dev/nbd1 00:06:27.140 13:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.140 13:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.140 1+0 records in 00:06:27.140 1+0 records out 00:06:27.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238213 s, 17.2 MB/s 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.140 13:52:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:27.140 13:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.140 13:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.140 13:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.140 13:52:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.140 13:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:27.401 { 00:06:27.401 "nbd_device": "/dev/nbd0", 00:06:27.401 "bdev_name": "Malloc0" 00:06:27.401 }, 00:06:27.401 { 00:06:27.401 "nbd_device": "/dev/nbd1", 00:06:27.401 "bdev_name": "Malloc1" 00:06:27.401 } 00:06:27.401 ]' 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.401 { 00:06:27.401 "nbd_device": "/dev/nbd0", 00:06:27.401 "bdev_name": "Malloc0" 00:06:27.401 }, 00:06:27.401 { 00:06:27.401 "nbd_device": "/dev/nbd1", 00:06:27.401 "bdev_name": "Malloc1" 00:06:27.401 } 00:06:27.401 ]' 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.401 /dev/nbd1' 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.401 /dev/nbd1' 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.401 13:52:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.402 256+0 records in 00:06:27.402 256+0 records out 00:06:27.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123304 s, 85.0 MB/s 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.402 256+0 records in 00:06:27.402 256+0 records out 00:06:27.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016551 s, 63.4 MB/s 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.402 256+0 records in 00:06:27.402 256+0 records out 00:06:27.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218358 s, 48.0 MB/s 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.402 13:52:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.662 13:52:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.922 13:52:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.922 13:52:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.182 13:52:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.182 [2024-07-15 13:52:26.292283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.442 [2024-07-15 13:52:26.356785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.442 [2024-07-15 13:52:26.356786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.442 [2024-07-15 13:52:26.388799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.442 [2024-07-15 13:52:26.388836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.746 13:52:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.746 13:52:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.746 spdk_app_start Round 2 00:06:31.746 13:52:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1148942 /var/tmp/spdk-nbd.sock 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1148942 ']' 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
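Each app_repeat round rebuilds the same bdev/nbd topology over the dedicated /var/tmp/spdk-nbd.sock RPC socket before the verify step runs. A sketch of that per-round lifecycle, with sizes and names taken from the log (64 MiB malloc bdevs with a 4 KiB block size; rpc.py auto-names them Malloc0 and Malloc1):

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # -> Malloc1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                  # JSON list consumed by nbd_get_count
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1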
00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.746 13:52:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.746 Malloc0 00:06:31.746 13:52:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.746 Malloc1 00:06:31.746 13:52:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.746 /dev/nbd0 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.746 13:52:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.746 13:52:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:31.746 1+0 records in 00:06:31.746 1+0 records out 00:06:31.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205637 s, 19.9 MB/s 00:06:32.008 13:52:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.008 13:52:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.008 13:52:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.008 13:52:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.008 13:52:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.008 13:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.008 13:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.008 13:52:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.008 /dev/nbd1 00:06:32.008 13:52:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.008 13:52:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.008 1+0 records in 00:06:32.008 1+0 records out 00:06:32.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292144 s, 14.0 MB/s 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.008 13:52:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.008 13:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.008 13:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.008 13:52:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.008 13:52:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.008 13:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:32.270 { 00:06:32.270 "nbd_device": "/dev/nbd0", 00:06:32.270 "bdev_name": "Malloc0" 00:06:32.270 }, 00:06:32.270 { 00:06:32.270 "nbd_device": "/dev/nbd1", 00:06:32.270 "bdev_name": "Malloc1" 00:06:32.270 } 00:06:32.270 ]' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.270 { 00:06:32.270 "nbd_device": "/dev/nbd0", 00:06:32.270 "bdev_name": "Malloc0" 00:06:32.270 }, 00:06:32.270 { 00:06:32.270 "nbd_device": "/dev/nbd1", 00:06:32.270 "bdev_name": "Malloc1" 00:06:32.270 } 00:06:32.270 ]' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.270 /dev/nbd1' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.270 /dev/nbd1' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.270 256+0 records in 00:06:32.270 256+0 records out 00:06:32.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120818 s, 86.8 MB/s 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.270 256+0 records in 00:06:32.270 256+0 records out 00:06:32.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179224 s, 58.5 MB/s 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.270 256+0 records in 00:06:32.270 256+0 records out 00:06:32.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168764 s, 62.1 MB/s 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.270 13:52:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.531 13:52:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.791 13:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.792 13:52:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.792 13:52:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.053 13:52:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.313 [2024-07-15 13:52:31.184941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.313 [2024-07-15 13:52:31.248172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.313 [2024-07-15 13:52:31.248173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.313 [2024-07-15 13:52:31.279496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.313 [2024-07-15 13:52:31.279533] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.617 13:52:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1148942 /var/tmp/spdk-nbd.sock 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1148942 ']' 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
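The teardown that follows asks the app to stop itself over RPC (spdk_kill_instance SIGTERM, as at event.sh@34) and then killprocess reaps the pid. A loose sketch of that pattern, assuming the pid belongs to a child of the calling shell; 1148942 is just the pid this run got:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM    # graceful in-band shutdown
  pid=1148942
  if kill -0 $pid 2>/dev/null; then   # fallback if the process is still alive
    kill $pid
    wait $pid
  fi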
00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:36.617 13:52:34 event.app_repeat -- event/event.sh@39 -- # killprocess 1148942 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1148942 ']' 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1148942 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1148942 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1148942' 00:06:36.617 killing process with pid 1148942 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1148942 00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1148942 00:06:36.617 spdk_app_start is called in Round 0. 00:06:36.617 Shutdown signal received, stop current app iteration 00:06:36.617 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:36.617 spdk_app_start is called in Round 1. 00:06:36.617 Shutdown signal received, stop current app iteration 00:06:36.617 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:36.617 spdk_app_start is called in Round 2. 00:06:36.617 Shutdown signal received, stop current app iteration 00:06:36.617 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:36.617 spdk_app_start is called in Round 3. 
00:06:36.617 Shutdown signal received, stop current app iteration
00:06:36.617 13:52:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:36.617 13:52:34 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:36.617
00:06:36.617 real 0m15.626s
00:06:36.617 user 0m33.760s
00:06:36.617 sys 0m2.143s
00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:36.617 13:52:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:36.617 ************************************
00:06:36.617 END TEST app_repeat
00:06:36.617 ************************************
00:06:36.617 13:52:34 event -- common/autotest_common.sh@1142 -- # return 0
00:06:36.617 13:52:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:36.618 13:52:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:36.618 13:52:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:36.618 13:52:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:36.618 13:52:34 event -- common/autotest_common.sh@10 -- # set +x
00:06:36.618 ************************************
00:06:36.618 START TEST cpu_locks
00:06:36.618 ************************************
00:06:36.618 13:52:34 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:36.618 * Looking for test storage...
00:06:36.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:36.618 13:52:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:36.618 13:52:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:36.618 13:52:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:36.618 13:52:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:36.618 13:52:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:36.618 13:52:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:36.618 13:52:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:36.618 ************************************
00:06:36.618 START TEST default_locks
00:06:36.618 ************************************
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1152231
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1152231
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1152231 ']'
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
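app_repeat is stopped above by the killprocess helper that recurs through the rest of this section: it validates the PID argument, probes liveness with kill -0, resolves the command name via ps --no-headers -o comm= and refuses to kill anything resolving to sudo, then kills and waits so the target's exit status is reaped. A condensed, illustrative sketch of that flow, hedged because the real helper in autotest_common.sh carries extra branches not shown here:

    pid=1148942                                  # PID taken from the trace above
    [ -n "$pid" ] && kill -0 "$pid"              # argument present and process alive?
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] && kill "$pid" && wait "$pid"
    # wait only reaps here because spdk_tgt is a child of the test shell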
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:36.618 13:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:36.618 [2024-07-15 13:52:34.650198] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:36.618 [2024-07-15 13:52:34.650247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152231 ]
00:06:36.618 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.618 [2024-07-15 13:52:34.715212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.878 [2024-07-15 13:52:34.780033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.447 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:37.447 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:06:37.447 13:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1152231
00:06:37.447 13:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1152231
00:06:37.447 13:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:38.016 lslocks: write error
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1152231
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1152231 ']'
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1152231
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152231
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152231'
killing process with pid 1152231
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1152231
00:06:38.016 13:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1152231
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1152231
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1152231
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1152231
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1152231 ']'
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:38.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1152231) - No such process
00:06:38.295 ERROR: process (pid: 1152231) is no longer running
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:38.295 13:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:38.296 13:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:38.296 13:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:38.296 13:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:38.296
00:06:38.296 real 0m1.563s
00:06:38.296 user 0m1.672s
00:06:38.296 sys 0m0.532s
00:06:38.296 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:38.296 13:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:38.296 ************************************
00:06:38.296 END TEST default_locks
00:06:38.296 ************************************
00:06:38.296 13:52:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:38.296 13:52:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:38.296 13:52:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:38.296 13:52:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:38.296 13:52:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:38.296 ************************************
00:06:38.296 START TEST default_locks_via_rpc
00:06:38.296 ************************************
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1152585
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1152585
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1152585 ']'
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:38.296 13:52:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:38.296 [2024-07-15 13:52:36.289347] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:38.296 [2024-07-15 13:52:36.289393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152585 ]
00:06:38.296 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.296 [2024-07-15 13:52:36.354465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.556 [2024-07-15 13:52:36.420584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1152585
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1152585
00:06:39.125 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
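default_locks and the default_locks_via_rpc run above both pivot on SPDK's per-core lock files: a target started with -m 0x1 takes a lock under /var/tmp that the harness detects with lslocks -p <pid> | grep spdk_cpu_lock (the stray 'lslocks: write error' is consistent with grep -q closing the pipe early), and the via_rpc variant toggles the same locks on a live target. A minimal sketch of that runtime toggle, assuming the target from the trace is still listening on the default /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks    # release the per-core lock files
    "$rpc" framework_enable_cpumask_locks     # re-acquire them
    lslocks -p "$pid" | grep spdk_cpu_lock    # lock on core 0 visible again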
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1152585
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1152585 ']'
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1152585
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152585
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152585'
killing process with pid 1152585
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1152585
00:06:39.386 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1152585
00:06:39.646
00:06:39.646 real 0m1.406s
00:06:39.646 user 0m1.496s
00:06:39.646 sys 0m0.467s
00:06:39.646 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:39.646 13:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:39.646 ************************************
00:06:39.646 END TEST default_locks_via_rpc
00:06:39.646 ************************************
00:06:39.646 13:52:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:39.646 13:52:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:39.646 13:52:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:39.646 13:52:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:39.646 13:52:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:39.646 ************************************
00:06:39.646 START TEST non_locking_app_on_locked_coremask
00:06:39.646 ************************************
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1152933
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1152933 /var/tmp/spdk.sock
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1152933 ']'
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:39.646 13:52:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:39.931 [2024-07-15 13:52:37.779145] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:39.931 [2024-07-15 13:52:37.779211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152933 ]
00:06:39.931 EAL: No free 2048 kB hugepages reported on node 1
00:06:39.931 [2024-07-15 13:52:37.853860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.931 [2024-07-15 13:52:37.928611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1153260
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1153260 /var/tmp/spdk2.sock
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1153260 ']'
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:40.502 13:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:40.502 [2024-07-15 13:52:38.600250] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:40.502 [2024-07-15 13:52:38.600303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153260 ]
00:06:40.762 EAL: No free 2048 kB hugepages reported on node 1
00:06:40.762 [2024-07-15 13:52:38.699364] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
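The second instance above is the crux of non_locking_app_on_locked_coremask: it reuses the already-locked core 0 but opts out with --disable-cpumask-locks and takes its own RPC socket via -r, so it comes up cleanly and logs 'CPU core locks deactivated' instead of colliding. Condensed from the trace, with binary and socket paths as used in this run:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                # first target locks core 0
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # second shares core 0 without locking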
00:06:40.762 [2024-07-15 13:52:38.699395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.762 [2024-07-15 13:52:38.833384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.378 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:41.378 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:41.378 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1152933
00:06:41.378 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1152933
00:06:41.378 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:41.963 lslocks: write error
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1152933
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1152933 ']'
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1152933
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152933
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152933'
killing process with pid 1152933
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1152933
00:06:41.963 13:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1152933
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1153260
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1153260 ']'
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1153260
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1153260
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1153260'
killing process with pid 1153260
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1153260
00:06:42.277 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1153260
00:06:42.546
00:06:42.546 real 0m2.858s
00:06:42.546 user 0m3.137s
00:06:42.546 sys 0m0.836s
00:06:42.546 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:42.546 13:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.546 ************************************
00:06:42.546 END TEST non_locking_app_on_locked_coremask
00:06:42.546 ************************************
00:06:42.546 13:52:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:42.546 13:52:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:42.546 13:52:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:42.546 13:52:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:42.546 13:52:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:42.546 ************************************
00:06:42.546 START TEST locking_app_on_unlocked_coremask
00:06:42.546 ************************************
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1153639
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1153639 /var/tmp/spdk.sock
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1153639 ']'
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:42.546 13:52:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.807 [2024-07-15 13:52:40.697177] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:42.807 [2024-07-15 13:52:40.697225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153639 ]
00:06:42.807 EAL: No free 2048 kB hugepages reported on node 1
00:06:42.807 [2024-07-15 13:52:40.763796] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:42.807 [2024-07-15 13:52:40.763827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.807 [2024-07-15 13:52:40.828372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1153772
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1153772 /var/tmp/spdk2.sock
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1153772 ']'
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:43.378 13:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:43.637 [2024-07-15 13:52:41.526820] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:43.637 [2024-07-15 13:52:41.526875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153772 ]
00:06:43.637 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.637 [2024-07-15 13:52:41.626399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.897 [2024-07-15 13:52:41.760992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.467 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:44.467 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:44.467 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1153772
00:06:44.467 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1153772
00:06:44.467 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:44.728 lslocks: write error
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1153639
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1153639 ']'
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1153639
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1153639
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1153639'
killing process with pid 1153639
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1153639
00:06:44.728 13:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1153639
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1153772
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1153772 ']'
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1153772
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1153772
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1153772'
killing process with pid 1153772
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1153772
00:06:45.298 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1153772
00:06:45.559
00:06:45.559 real 0m2.852s
00:06:45.559 user 0m3.125s
00:06:45.559 sys 0m0.861s
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:45.559 ************************************
00:06:45.559 END TEST locking_app_on_unlocked_coremask
00:06:45.559 ************************************
00:06:45.559 13:52:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:45.559 13:52:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:45.559 13:52:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:45.559 13:52:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:45.559 13:52:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:45.559 ************************************
00:06:45.559 START TEST locking_app_on_locked_coremask
00:06:45.559 ************************************
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1154347
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1154347 /var/tmp/spdk.sock
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1154347 ']'
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:45.559 13:52:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:45.559 [2024-07-15 13:52:43.625021] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
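locking_app_on_unlocked_coremask, which ends above, inverts the earlier arrangement: the first target runs with --disable-cpumask-locks so core 0 stays unlocked, and the plain second target is then free to take the lock there; both come up and are killed in turn. In outline, using the same hypothetical $spdk_tgt variable as the previous sketch:

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &   # leaves core 0 unlocked
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # claims the core 0 lock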
00:06:45.559 [2024-07-15 13:52:43.625069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154347 ]
00:06:45.820 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.820 [2024-07-15 13:52:43.689370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.820 [2024-07-15 13:52:43.753187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1154359
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1154359 /var/tmp/spdk2.sock
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1154359 /var/tmp/spdk2.sock
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1154359 /var/tmp/spdk2.sock
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1154359 ']'
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:46.390 13:52:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:46.390 [2024-07-15 13:52:44.407313] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:46.390 [2024-07-15 13:52:44.407366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154359 ]
00:06:46.390 EAL: No free 2048 kB hugepages reported on node 1
00:06:46.390 [2024-07-15 13:52:44.503467] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1154347 has claimed it.
00:06:46.390 [2024-07-15 13:52:44.503506] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:46.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1154359) - No such process
00:06:46.961 ERROR: process (pid: 1154359) is no longer running
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1154347
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1154347
00:06:46.961 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:47.551 lslocks: write error
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1154347
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1154347 ']'
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1154347
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1154347
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1154347'
killing process with pid 1154347
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1154347
00:06:47.551 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1154347
00:06:47.813
00:06:47.813 real 0m2.196s
00:06:47.813 user 0m2.408s
00:06:47.813 sys 0m0.625s
00:06:47.813 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:47.813 13:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:47.813 ************************************
00:06:47.813 END TEST locking_app_on_locked_coremask
00:06:47.813 ************************************
00:06:47.813 13:52:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:47.813 13:52:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:47.813 13:52:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:47.813 13:52:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:47.813 13:52:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:47.813 ************************************
00:06:47.813 START TEST locking_overlapped_coremask
00:06:47.813 ************************************
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1154723
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1154723 /var/tmp/spdk.sock
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1154723 ']'
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:47.813 13:52:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:47.813 [2024-07-15 13:52:45.897173] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
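locking_app_on_locked_coremask, which just ended, is the negative case: with pid 1154347 holding the core 0 lock, the second target dies at claim_cpu_cores ('Cannot create lock on core 0, probably process 1154347 has claimed it'), and the harness asserts that failure through its NOT wrapper, whose exit handling the trace shows as es=1 plus the (( es > 128 )) signal check. A reduced, illustrative stand-in for that wrapper (the real helper in autotest_common.sh also strips signal bits and can match expected output):

    NOT() { ! "$@"; }                              # succeed only if the wrapped command fails
    NOT "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock  # expected to exit non-zero while core 0 is locked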
00:06:47.813 [2024-07-15 13:52:45.897228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154723 ] 00:06:48.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.074 [2024-07-15 13:52:45.966672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.074 [2024-07-15 13:52:46.040199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.074 [2024-07-15 13:52:46.040315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.074 [2024-07-15 13:52:46.040318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1154915 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1154915 /var/tmp/spdk2.sock 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1154915 /var/tmp/spdk2.sock 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1154915 /var/tmp/spdk2.sock 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1154915 ']' 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.645 13:52:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.645 [2024-07-15 13:52:46.707125] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:48.645 [2024-07-15 13:52:46.707183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154915 ] 00:06:48.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.905 [2024-07-15 13:52:46.786795] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1154723 has claimed it. 00:06:48.905 [2024-07-15 13:52:46.786827] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1154915) - No such process 00:06:49.477 ERROR: process (pid: 1154915) is no longer running 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1154723 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1154723 ']' 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1154723 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1154723 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1154723' 00:06:49.477 killing process with pid 1154723 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1154723 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1154723 00:06:49.477 00:06:49.477 real 0m1.722s 00:06:49.477 user 0m4.814s 00:06:49.477 sys 0m0.384s 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.477 13:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.477 ************************************ 00:06:49.477 END TEST locking_overlapped_coremask 00:06:49.477 ************************************ 00:06:49.738 13:52:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.738 13:52:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.738 13:52:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.738 13:52:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.738 13:52:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.738 ************************************ 00:06:49.738 START TEST locking_overlapped_coremask_via_rpc 00:06:49.738 ************************************ 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1155100 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1155100 /var/tmp/spdk.sock 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1155100 ']' 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.738 13:52:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.738 [2024-07-15 13:52:47.695364] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:49.738 [2024-07-15 13:52:47.695411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155100 ] 00:06:49.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.738 [2024-07-15 13:52:47.760739] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
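The overlapped test that just closed mixes masks rather than repeating one: the first target claims cores 0-2 with -m 0x7, so the second, asking for cores 2-4 with -m 0x1c, collides on core 2 and is refused; check_remaining_locks then asserts that exactly /var/tmp/spdk_cpu_lock_000 through _002 survive. Condensed, with masks taken from the trace and the NOT helper sketched earlier:

    "$spdk_tgt" -m 0x7 &                             # locks cores 0, 1, 2
    NOT "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock   # refused: core 2 already locked
    ls /var/tmp/spdk_cpu_lock_*                      # expect _000 _001 _002 only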
00:06:49.738 [2024-07-15 13:52:47.760767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.738 [2024-07-15 13:52:47.827057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.738 [2024-07-15 13:52:47.827172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.738 [2024-07-15 13:52:47.827174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1155343 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1155343 /var/tmp/spdk2.sock 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1155343 ']' 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.681 13:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.681 [2024-07-15 13:52:48.500066] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:50.681 [2024-07-15 13:52:48.500118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155343 ] 00:06:50.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.681 [2024-07-15 13:52:48.581906] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
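Worth decoding the two core masks before the RPC exchange starts. -m 0x7 is binary 111, i.e. cores 0-2, which matches the three "Reactor started on core" notices for the first target; the second target just launched with -m 0x1c, binary 11100, i.e. cores 2-4. The overlap is deliberate:

    # 0x07 = 0b00111 -> cores {0,1,2}   (spdk_tgt pid 1155100)
    # 0x1c = 0b11100 -> cores {2,3,4}   (spdk_tgt pid 1155343)
    # shared core: 2, the core every claim error in this test names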
00:06:50.681 [2024-07-15 13:52:48.581926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.681 [2024-07-15 13:52:48.687536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.681 [2024-07-15 13:52:48.687699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.681 [2024-07-15 13:52:48.687701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.251 [2024-07-15 13:52:49.274810] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1155100 has claimed it. 
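Both targets came up with --disable-cpumask-locks, so sharing core 2 was allowed at startup; the locks only materialize once framework_enable_cpumask_locks is issued to the first target, which is why the claim attempt above fails. The claims are ordinary files, so the state can be inspected from a shell; the paths below are taken from the check_remaining_locks glob used later in this test:

    ls /var/tmp/spdk_cpu_lock_*
    # /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002
    # one file per core in mask 0x7; core 2's file is the one the second target cannot take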
00:06:51.251 request: 00:06:51.251 { 00:06:51.251 "method": "framework_enable_cpumask_locks", 00:06:51.251 "req_id": 1 00:06:51.251 } 00:06:51.251 Got JSON-RPC error response 00:06:51.251 response: 00:06:51.251 { 00:06:51.251 "code": -32603, 00:06:51.251 "message": "Failed to claim CPU core: 2" 00:06:51.251 } 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1155100 /var/tmp/spdk.sock 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1155100 ']' 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.251 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.252 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.252 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1155343 /var/tmp/spdk2.sock 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1155343 ']' 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
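The -32603 response above is the expected outcome, and it can be reproduced by hand against the second target's RPC socket. A sketch, assuming the stock SPDK script layout for rpc.py; the method name is the one echoed in the request:

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # request:  {"method": "framework_enable_cpumask_locks", "req_id": 1}
    # response: {"code": -32603, "message": "Failed to claim CPU core: 2"}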
00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.512 00:06:51.512 real 0m1.974s 00:06:51.512 user 0m0.760s 00:06:51.512 sys 0m0.148s 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.512 13:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.512 ************************************ 00:06:51.512 END TEST locking_overlapped_coremask_via_rpc 00:06:51.512 ************************************ 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.773 13:52:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.773 13:52:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1155100 ]] 00:06:51.773 13:52:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1155100 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1155100 ']' 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1155100 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1155100 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1155100' 00:06:51.773 killing process with pid 1155100 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1155100 00:06:51.773 13:52:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1155100 00:06:52.034 13:52:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1155343 ]] 00:06:52.034 13:52:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1155343 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1155343 ']' 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1155343 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1155343 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1155343' 00:06:52.034 killing process with pid 1155343 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1155343 00:06:52.034 13:52:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1155343 00:06:52.295 13:52:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.295 13:52:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.295 13:52:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1155100 ]] 00:06:52.295 13:52:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1155100 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1155100 ']' 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1155100 00:06:52.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1155100) - No such process 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1155100 is not found' 00:06:52.295 Process with pid 1155100 is not found 00:06:52.295 13:52:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1155343 ]] 00:06:52.295 13:52:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1155343 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1155343 ']' 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1155343 00:06:52.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1155343) - No such process 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1155343 is not found' 00:06:52.295 Process with pid 1155343 is not found 00:06:52.295 13:52:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.295 00:06:52.295 real 0m15.721s 00:06:52.295 user 0m26.864s 00:06:52.295 sys 0m4.738s 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.295 13:52:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.295 ************************************ 00:06:52.295 END TEST cpu_locks 00:06:52.295 ************************************ 00:06:52.295 13:52:50 event -- common/autotest_common.sh@1142 -- # return 0 00:06:52.295 00:06:52.295 real 0m41.222s 00:06:52.295 user 1m19.930s 00:06:52.295 sys 0m7.823s 00:06:52.295 13:52:50 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.295 13:52:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.295 ************************************ 00:06:52.295 END TEST event 00:06:52.295 ************************************ 00:06:52.295 13:52:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.295 13:52:50 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:52.295 13:52:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.295 13:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.295 
13:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.295 ************************************ 00:06:52.295 START TEST thread 00:06:52.295 ************************************ 00:06:52.295 13:52:50 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:52.295 * Looking for test storage... 00:06:52.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:52.295 13:52:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.295 13:52:50 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:52.295 13:52:50 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.295 13:52:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.608 ************************************ 00:06:52.608 START TEST thread_poller_perf 00:06:52.608 ************************************ 00:06:52.608 13:52:50 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.608 [2024-07-15 13:52:50.453220] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:52.608 [2024-07-15 13:52:50.453318] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155862 ] 00:06:52.608 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.608 [2024-07-15 13:52:50.526034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.608 [2024-07-15 13:52:50.595831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.608 Running 1000 pollers for 1 seconds with 1 microseconds period. 
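The poller_perf flags map directly onto the banner just printed, which is enough to read the two result blocks that follow; the decoding below is inferred from that banner rather than from the tool's help text:

    # "Running 1000 pollers for 1 seconds with 1 microseconds period."
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    #   -b 1000   pollers registered
    #   -l 1      poller period in microseconds (timed pollers)
    #   -t 1      run time in seconds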
00:06:53.549 ====================================== 00:06:53.549 busy:2409793676 (cyc) 00:06:53.549 total_run_count: 288000 00:06:53.549 tsc_hz: 2400000000 (cyc) 00:06:53.549 ====================================== 00:06:53.549 poller_cost: 8367 (cyc), 3486 (nsec) 00:06:53.549 00:06:53.549 real 0m1.226s 00:06:53.549 user 0m1.132s 00:06:53.549 sys 0m0.089s 00:06:53.549 13:52:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.549 13:52:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.549 ************************************ 00:06:53.549 END TEST thread_poller_perf 00:06:53.549 ************************************ 00:06:53.808 13:52:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:53.808 13:52:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.808 13:52:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:53.808 13:52:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.808 13:52:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.808 ************************************ 00:06:53.809 START TEST thread_poller_perf 00:06:53.809 ************************************ 00:06:53.809 13:52:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.809 [2024-07-15 13:52:51.752928] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:53.809 [2024-07-15 13:52:51.753032] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156073 ] 00:06:53.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.809 [2024-07-15 13:52:51.823698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.809 [2024-07-15 13:52:51.889386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.809 Running 1000 pollers for 1 seconds with 0 microseconds period. 
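The poller_cost line is the two counters above it divided out, with the nanosecond figure following from the reported 2.4 GHz TSC:

    echo $(( 2409793676 / 288000 ))   # 8367 cycles per poller invocation
    echo $(( 8367 * 10 / 24 ))        # 3486 ns at tsc_hz = 2400000000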
00:06:55.192 ====================================== 00:06:55.192 busy:2402293456 (cyc) 00:06:55.192 total_run_count: 3768000 00:06:55.192 tsc_hz: 2400000000 (cyc) 00:06:55.192 ====================================== 00:06:55.192 poller_cost: 637 (cyc), 265 (nsec) 00:06:55.192 00:06:55.192 real 0m1.212s 00:06:55.192 user 0m1.131s 00:06:55.192 sys 0m0.077s 00:06:55.192 13:52:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.192 13:52:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.192 ************************************ 00:06:55.192 END TEST thread_poller_perf 00:06:55.192 ************************************ 00:06:55.192 13:52:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:55.192 13:52:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.192 00:06:55.192 real 0m2.684s 00:06:55.192 user 0m2.357s 00:06:55.192 sys 0m0.336s 00:06:55.192 13:52:52 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.192 13:52:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.192 ************************************ 00:06:55.192 END TEST thread 00:06:55.192 ************************************ 00:06:55.192 13:52:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:55.193 13:52:53 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.193 13:52:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.193 13:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.193 13:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.193 ************************************ 00:06:55.193 START TEST accel 00:06:55.193 ************************************ 00:06:55.193 13:52:53 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.193 * Looking for test storage... 00:06:55.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:55.193 13:52:53 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:55.193 13:52:53 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:55.193 13:52:53 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.193 13:52:53 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1156330 00:06:55.193 13:52:53 accel -- accel/accel.sh@63 -- # waitforlisten 1156330 00:06:55.193 13:52:53 accel -- common/autotest_common.sh@829 -- # '[' -z 1156330 ']' 00:06:55.193 13:52:53 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.193 13:52:53 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.193 13:52:53 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
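Same arithmetic for the zero-period run whose results opened this block, before the accel suite output takes over: removing the 1 µs period makes each poller roughly 13x cheaper (8367 / 637), presumably the timer bookkeeping that the busy-poll variant never executes.

    echo $(( 2402293456 / 3768000 ))  # 637 cycles per invocation without a timer period
    echo $(( 637 * 10 / 24 ))         # 265 ns at 2.4 GHz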
00:06:55.193 13:52:53 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:55.193 13:52:53 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.193 13:52:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.193 13:52:53 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:55.193 13:52:53 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.193 13:52:53 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.193 13:52:53 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.193 13:52:53 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.193 13:52:53 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.193 13:52:53 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:55.193 13:52:53 accel -- accel/accel.sh@41 -- # jq -r . 00:06:55.193 [2024-07-15 13:52:53.220601] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:55.193 [2024-07-15 13:52:53.220678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156330 ] 00:06:55.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.193 [2024-07-15 13:52:53.292435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.453 [2024-07-15 13:52:53.369838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.025 13:52:53 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.025 13:52:53 accel -- common/autotest_common.sh@862 -- # return 0 00:06:56.025 13:52:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:56.025 13:52:54 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:56.025 13:52:54 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:56.025 13:52:54 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:56.025 13:52:54 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:56.025 13:52:54 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:56.025 13:52:54 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.025 13:52:54 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:56.025 13:52:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.025 13:52:54 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.025 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.025 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.025 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.026 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.026 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.026 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.026 
13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.026 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.026 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.026 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.026 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.026 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.026 13:52:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.026 13:52:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.026 13:52:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.026 13:52:54 accel -- accel/accel.sh@75 -- # killprocess 1156330 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@948 -- # '[' -z 1156330 ']' 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@952 -- # kill -0 1156330 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@953 -- # uname 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1156330 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1156330' 00:06:56.026 killing process with pid 1156330 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@967 -- # kill 1156330 00:06:56.026 13:52:54 accel -- common/autotest_common.sh@972 -- # wait 1156330 00:06:56.285 13:52:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:56.285 13:52:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:56.285 13:52:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:56.285 13:52:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.285 13:52:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.285 13:52:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:56.285 13:52:54 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:56.285 13:52:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:56.285 13:52:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.286 13:52:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.286 13:52:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.286 13:52:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.286 13:52:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.286 13:52:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:56.286 13:52:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
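The for opc_opt loop above read every opcode out of accel_get_opc_assignments and recorded software for each, which is what a target with no hardware accel modules configured reports. The same table can be pulled by hand; the jq filter is copied from the trace, and the rpc.py path assumes the stock layout:

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # crc32c=software
    # ...one line per opcode, all software in this run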
00:06:56.286 13:52:54 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.286 13:52:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:56.545 13:52:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.545 13:52:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:56.545 13:52:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.545 13:52:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.545 13:52:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.545 ************************************ 00:06:56.545 START TEST accel_missing_filename 00:06:56.545 ************************************ 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.545 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:56.545 13:52:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:56.545 [2024-07-15 13:52:54.492165] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:56.545 [2024-07-15 13:52:54.492260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156661 ] 00:06:56.545 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.545 [2024-07-15 13:52:54.560936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.545 [2024-07-15 13:52:54.626174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.545 [2024-07-15 13:52:54.657978] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.805 [2024-07-15 13:52:54.694782] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:56.805 A filename is required. 
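The failure just traced reproduces outside the harness, minus its -c /dev/fd/62 config plumbing: a compress workload with no -l input file is rejected at startup, and the non-zero exit status is exactly what the NOT wrapper asserts on:

    ./build/examples/accel_perf -t 1 -w compress
    # => "A filename is required." / "ERROR starting application", non-zero exit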
00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.805 00:06:56.805 real 0m0.287s 00:06:56.805 user 0m0.219s 00:06:56.805 sys 0m0.109s 00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.805 13:52:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:56.805 ************************************ 00:06:56.805 END TEST accel_missing_filename 00:06:56.805 ************************************ 00:06:56.805 13:52:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.805 13:52:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.805 13:52:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:56.805 13:52:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.805 13:52:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.805 ************************************ 00:06:56.805 START TEST accel_compress_verify 00:06:56.805 ************************************ 00:06:56.805 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.805 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:56.805 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.805 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.805 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.805 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.805 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.806 13:52:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.806 13:52:54 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:56.806 13:52:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:56.806 [2024-07-15 13:52:54.854763] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:56.806 [2024-07-15 13:52:54.854829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156725 ] 00:06:56.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.067 [2024-07-15 13:52:54.924830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.067 [2024-07-15 13:52:54.996925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.067 [2024-07-15 13:52:55.028802] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.067 [2024-07-15 13:52:55.065684] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:57.067 00:06:57.067 Compression does not support the verify option, aborting. 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.067 00:06:57.067 real 0m0.295s 00:06:57.067 user 0m0.221s 00:06:57.067 sys 0m0.114s 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.067 13:52:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:57.067 ************************************ 00:06:57.067 END TEST accel_compress_verify 00:06:57.067 ************************************ 00:06:57.067 13:52:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.067 13:52:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:57.067 13:52:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.067 13:52:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.067 13:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.329 ************************************ 00:06:57.329 START TEST accel_wrong_workload 00:06:57.329 ************************************ 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.329 13:52:55 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:57.329 13:52:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:57.329 Unsupported workload type: foobar 00:06:57.329 [2024-07-15 13:52:55.226791] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:57.329 accel_perf options: 00:06:57.329 [-h help message] 00:06:57.329 [-q queue depth per core] 00:06:57.329 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.329 [-T number of threads per core 00:06:57.329 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.329 [-t time in seconds] 00:06:57.329 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.329 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:57.329 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.329 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.329 [-S for crc32c workload, use this seed value (default 0) 00:06:57.329 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.329 [-f for fill workload, use this BYTE value (default 255) 00:06:57.329 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.329 [-y verify result if this switch is on] 00:06:57.329 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.329 Can be used to spread operations across a wider range of memory. 
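The exit-status bookkeeping threaded through these NOT-wrapped runs follows one pattern: es=234 became 106 and, in the compress_verify run, es=161 became 33, both a subtraction of 128, consistent with folding out the shell's 128+signal bias before the case statement collapses anything recognized down to a plain failure:

    # es=234 -> (( es > 128 )) -> es=106 -> case "$es" in ... -> es=1
    # es=161 -> (( es > 128 )) -> es=33  -> case "$es" in ... -> es=1
    # NOT then only asserts (( !es == 0 )), i.e. that the command really did fail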
00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.329 00:06:57.329 real 0m0.038s 00:06:57.329 user 0m0.020s 00:06:57.329 sys 0m0.017s 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.329 13:52:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:57.329 ************************************ 00:06:57.329 END TEST accel_wrong_workload 00:06:57.329 ************************************ 00:06:57.329 Error: writing output failed: Broken pipe 00:06:57.329 13:52:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.329 13:52:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.329 13:52:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:57.329 13:52:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.329 13:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.329 ************************************ 00:06:57.329 START TEST accel_negative_buffers 00:06:57.329 ************************************ 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.329 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:57.329 13:52:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:57.329 -x option must be non-negative. 
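For contrast with the two invocations just rejected, the usage text itself spells out the smallest accepted form: -x takes the number of xor source buffers with a minimum of 2, so a valid run of the command under test would look like:

    ./build/examples/accel_perf -t 1 -w xor -y -x 2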
00:06:57.329 [2024-07-15 13:52:55.340745] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:57.329 accel_perf options: 00:06:57.329 [-h help message] 00:06:57.329 [-q queue depth per core] 00:06:57.329 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.329 [-T number of threads per core 00:06:57.329 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.329 [-t time in seconds] 00:06:57.329 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.329 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:57.329 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.329 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.329 [-S for crc32c workload, use this seed value (default 0) 00:06:57.330 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.330 [-f for fill workload, use this BYTE value (default 255) 00:06:57.330 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.330 [-y verify result if this switch is on] 00:06:57.330 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.330 Can be used to spread operations across a wider range of memory. 00:06:57.330 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:57.330 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.330 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.330 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.330 00:06:57.330 real 0m0.038s 00:06:57.330 user 0m0.016s 00:06:57.330 sys 0m0.021s 00:06:57.330 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.330 13:52:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:57.330 ************************************ 00:06:57.330 END TEST accel_negative_buffers 00:06:57.330 ************************************ 00:06:57.330 Error: writing output failed: Broken pipe 00:06:57.330 13:52:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.330 13:52:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:57.330 13:52:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:57.330 13:52:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.330 13:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.330 ************************************ 00:06:57.330 START TEST accel_crc32c 00:06:57.330 ************************************ 00:06:57.330 13:52:55 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:57.330 13:52:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:57.591 [2024-07-15 13:52:55.451199] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:57.591 [2024-07-15 13:52:55.451266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157069 ] 00:06:57.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.591 [2024-07-15 13:52:55.519895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.591 [2024-07-15 13:52:55.584478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.591 13:52:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:58.975 13:52:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.975 00:06:58.975 real 0m1.291s 00:06:58.975 user 0m1.190s 00:06:58.975 sys 0m0.113s 00:06:58.975 13:52:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.975 13:52:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:58.975 ************************************ 00:06:58.975 END TEST accel_crc32c 00:06:58.975 ************************************ 00:06:58.975 13:52:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.975 13:52:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:58.975 13:52:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.975 13:52:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.975 13:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.975 ************************************ 00:06:58.975 START TEST accel_crc32c_C2 00:06:58.975 ************************************ 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:58.975 13:52:56 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.975 [2024-07-15 13:52:56.816822] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:58.975 [2024-07-15 13:52:56.816919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157277 ] 00:06:58.975 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.975 [2024-07-15 13:52:56.888914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.975 [2024-07-15 13:52:56.959141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.975 13:52:56 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.975 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:58.976 13:52:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.362 00:07:00.362 real 0m1.301s 00:07:00.362 user 0m1.203s 00:07:00.362 sys 0m0.110s 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.362 13:52:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:00.362 ************************************ 00:07:00.362 END TEST accel_crc32c_C2 00:07:00.362 ************************************ 00:07:00.362 13:52:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.362 13:52:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:00.362 13:52:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:00.362 13:52:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.362 13:52:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.362 ************************************ 00:07:00.362 START TEST accel_copy 00:07:00.362 ************************************ 00:07:00.362 13:52:58 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:00.362 [2024-07-15 13:52:58.196228] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:00.362 [2024-07-15 13:52:58.196289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157481 ] 00:07:00.362 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.362 [2024-07-15 13:52:58.265106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.362 [2024-07-15 13:52:58.333899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.362 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.363 13:52:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.747 
13:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.747 13:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:01.748 13:52:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.748 00:07:01.748 real 0m1.296s 00:07:01.748 user 0m1.192s 00:07:01.748 sys 0m0.115s 00:07:01.748 13:52:59 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.748 13:52:59 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.748 ************************************ 00:07:01.748 END TEST accel_copy 00:07:01.748 ************************************ 00:07:01.748 13:52:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.748 13:52:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.748 13:52:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:01.748 13:52:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.748 13:52:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.748 ************************************ 00:07:01.748 START TEST accel_fill 00:07:01.748 ************************************ 00:07:01.748 13:52:59 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:01.748 [2024-07-15 13:52:59.568728] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:01.748 [2024-07-15 13:52:59.568804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157809 ] 00:07:01.748 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.748 [2024-07-15 13:52:59.639004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.748 [2024-07-15 13:52:59.710125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.748 13:52:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.722 13:53:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.722 13:53:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.722 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.722 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.983 13:53:00 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:02.983 13:53:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.983 00:07:02.983 real 0m1.300s 00:07:02.983 user 0m1.200s 00:07:02.983 sys 0m0.111s 00:07:02.983 13:53:00 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.983 13:53:00 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:02.983 ************************************ 00:07:02.983 END TEST accel_fill 00:07:02.983 ************************************ 00:07:02.983 13:53:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.983 13:53:00 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:02.983 13:53:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.983 13:53:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.983 13:53:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.983 ************************************ 00:07:02.983 START TEST accel_copy_crc32c 00:07:02.983 ************************************ 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:02.983 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.984 13:53:00 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.984 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.984 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.984 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.984 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:02.984 13:53:00 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:02.984 [2024-07-15 13:53:00.948172] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:02.984 [2024-07-15 13:53:00.948285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158164 ] 00:07:02.984 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.984 [2024-07-15 13:53:01.023855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.984 [2024-07-15 13:53:01.093013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.244 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.245 
13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.245 13:53:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.187 00:07:04.187 real 0m1.305s 00:07:04.187 user 0m1.201s 00:07:04.187 sys 0m0.116s 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.187 13:53:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:04.187 ************************************ 00:07:04.187 END TEST accel_copy_crc32c 00:07:04.187 ************************************ 00:07:04.187 13:53:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.187 13:53:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.187 13:53:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:04.187 13:53:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.187 13:53:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.449 ************************************ 00:07:04.449 START TEST accel_copy_crc32c_C2 00:07:04.449 ************************************ 00:07:04.449 13:53:02 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:04.449 [2024-07-15 13:53:02.329660] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:04.449 [2024-07-15 13:53:02.329774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158513 ] 00:07:04.449 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.449 [2024-07-15 13:53:02.409708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.449 [2024-07-15 13:53:02.478111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 13:53:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.833 00:07:05.833 real 0m1.308s 00:07:05.833 user 0m1.210s 00:07:05.833 sys 0m0.110s 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.833 13:53:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:05.833 ************************************ 00:07:05.833 END TEST accel_copy_crc32c_C2 00:07:05.833 ************************************ 00:07:05.833 13:53:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.833 13:53:03 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:05.833 13:53:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.833 13:53:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.833 13:53:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.833 ************************************ 00:07:05.833 START TEST accel_dualcast 00:07:05.833 ************************************ 00:07:05.833 13:53:03 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:05.833 [2024-07-15 13:53:03.713073] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:05.833 [2024-07-15 13:53:03.713168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158793 ] 00:07:05.833 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.833 [2024-07-15 13:53:03.783369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.833 [2024-07-15 13:53:03.852643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 13:53:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:04 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:07.219 13:53:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.219 00:07:07.219 real 0m1.299s 00:07:07.219 user 0m1.202s 00:07:07.219 sys 0m0.108s 00:07:07.219 13:53:04 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.219 13:53:04 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:07.219 ************************************ 00:07:07.219 END TEST accel_dualcast 00:07:07.219 ************************************ 00:07:07.219 13:53:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.219 13:53:05 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:07.219 13:53:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.219 13:53:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.219 13:53:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.219 ************************************ 00:07:07.219 START TEST accel_compare 00:07:07.219 ************************************ 00:07:07.219 13:53:05 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:07.219 [2024-07-15 13:53:05.089982] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
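
accel.sh@107 through @113 walk one opcode per run_test call (dualcast, compare, xor, xor -x 3, dif_verify, dif_generate, dif_generate_copy), each as a fresh one-second accel_perf run. The sweep can be approximated outside the harness with a plain loop; the relative path below assumes the SPDK source tree as the working directory rather than the absolute workspace path this job prints:

    # One verified one-second run per opcode, stopping on the first failure.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for op in dualcast compare xor dif_verify dif_generate dif_generate_copy; do
        ./build/examples/accel_perf -t 1 -w "$op" -y || exit 1
    done
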
00:07:07.219 [2024-07-15 13:53:05.090080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158984 ] 00:07:07.219 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.219 [2024-07-15 13:53:05.167923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.219 [2024-07-15 13:53:05.237546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.219 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.220 13:53:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.606 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 
13:53:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:08.607 13:53:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.607 00:07:08.607 real 0m1.306s 00:07:08.607 user 0m1.194s 00:07:08.607 sys 0m0.123s 00:07:08.607 13:53:06 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.607 13:53:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:08.607 ************************************ 00:07:08.607 END TEST accel_compare 00:07:08.607 ************************************ 00:07:08.607 13:53:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.607 13:53:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:08.607 13:53:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.607 13:53:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.607 13:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.607 ************************************ 00:07:08.607 START TEST accel_xor 00:07:08.607 ************************************ 00:07:08.607 13:53:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:08.607 [2024-07-15 13:53:06.472059] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
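
The real/user/sys triplet printed after each case is the shell timing for the whole accel_perf run: roughly the one-second measurement window requested by -t 1 plus about 0.3 s of application start-up and teardown. user+sys tracking real closely is expected here, since the EAL core mask is 0x1 and the single SPDK reactor busy-polls its core for the whole run (per the "Reactor started on core 0" notices above). A quick way to see the same shape on any one case:

    # Expect roughly 1.3 s wall time, almost all of it on-CPU on the
    # single polled core.
    time ./build/examples/accel_perf -t 1 -w compare -y
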
00:07:08.607 [2024-07-15 13:53:06.472170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159252 ] 00:07:08.607 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.607 [2024-07-15 13:53:06.553540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.607 [2024-07-15 13:53:06.628196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.607 13:53:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.993 00:07:09.993 real 0m1.316s 00:07:09.993 user 0m1.201s 00:07:09.993 sys 0m0.126s 00:07:09.993 13:53:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.993 13:53:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:09.993 ************************************ 00:07:09.993 END TEST accel_xor 00:07:09.993 ************************************ 00:07:09.993 13:53:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.993 13:53:07 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:09.993 13:53:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:09.993 13:53:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.993 13:53:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.993 ************************************ 00:07:09.993 START TEST accel_xor 00:07:09.993 ************************************ 00:07:09.993 13:53:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:09.993 13:53:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:09.993 [2024-07-15 13:53:07.858234] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
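
The two back-to-back accel_xor cases differ only in source count: the first ran with two XOR sources (val=2 in its trace), while the case starting here passes -x 3 (which is also why run_test now sees 9 arguments instead of 7). Reproduced standalone:

    # Third xor source buffer requested via -x, matching the
    # "accel_test -t 1 -w xor -y -x 3" invocation above.
    ./build/examples/accel_perf -t 1 -w xor -y -x 3
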
00:07:09.993 [2024-07-15 13:53:07.858302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159601 ] 00:07:09.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.993 [2024-07-15 13:53:07.930321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.993 [2024-07-15 13:53:08.000800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.993 13:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:11.378 13:53:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.378 00:07:11.378 real 0m1.301s 00:07:11.378 user 0m1.199s 00:07:11.378 sys 0m0.114s 00:07:11.378 13:53:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.378 13:53:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:11.378 ************************************ 00:07:11.378 END TEST accel_xor 00:07:11.378 ************************************ 00:07:11.378 13:53:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.378 13:53:09 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:11.378 13:53:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:11.378 13:53:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.378 13:53:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.378 ************************************ 00:07:11.378 START TEST accel_dif_verify 00:07:11.378 ************************************ 00:07:11.378 13:53:09 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:11.378 [2024-07-15 13:53:09.230291] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
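
Both xor cases pass -y, so accel_perf presumably recomputes the result on the CPU and compares it against the destination buffer. The property being checked is plain bytewise XOR across all sources; a one-line illustration in shell arithmetic (not SPDK code):

    # Each destination byte is the XOR of the corresponding source bytes,
    # e.g. 0xac ^ 0x53 ^ 0x0f = 0xf0.
    a=0xac; b=0x53; c=0x0f
    printf 'xor(%#x, %#x, %#x) = %#x\n' $((a)) $((b)) $((c)) $((a ^ b ^ c))
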
00:07:11.378 [2024-07-15 13:53:09.230355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159954 ] 00:07:11.378 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.378 [2024-07-15 13:53:09.298577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.378 [2024-07-15 13:53:09.364511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.378 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.379 13:53:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.763 13:53:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:12.764 13:53:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.764 00:07:12.764 real 0m1.293s 00:07:12.764 user 0m1.197s 00:07:12.764 sys 0m0.108s 00:07:12.764 13:53:10 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.764 13:53:10 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:12.764 ************************************ 00:07:12.764 END TEST accel_dif_verify 00:07:12.764 ************************************ 00:07:12.764 13:53:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.764 13:53:10 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:12.764 13:53:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:12.764 13:53:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.764 13:53:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.764 ************************************ 00:07:12.764 START TEST accel_dif_generate 00:07:12.764 ************************************ 00:07:12.764 13:53:10 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 
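
The dif_verify trace echoes four sizes instead of the usual one: two '4096 bytes' buffers plus '512 bytes' and '8 bytes'. Read as DIF parameters, that is presumably a 4 KiB payload split into 512-byte blocks, each carrying an 8-byte protection tuple whose tags the opcode checks; the sizes are accel_perf defaults echoed back rather than flags passed by accel.sh@111. Note the dif cases drop -y (run_test sees 6 arguments, and the trace ends in val=No rather than val=Yes):

    # dif_verify with harness-default sizing; only -t and -w are explicit.
    ./build/examples/accel_perf -t 1 -w dif_verify
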
13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:12.764 [2024-07-15 13:53:10.599386] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:12.764 [2024-07-15 13:53:10.599470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160267 ] 00:07:12.764 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.764 [2024-07-15 13:53:10.669822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.764 [2024-07-15 13:53:10.739170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:12.764 13:53:10 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.764 13:53:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.153 13:53:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:14.153 13:53:11 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.153 00:07:14.153 real 0m1.297s 00:07:14.153 user 0m1.203s 00:07:14.153 sys 0m0.107s 00:07:14.153 13:53:11 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.153 13:53:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:14.153 ************************************ 00:07:14.153 END TEST accel_dif_generate 00:07:14.153 ************************************ 00:07:14.153 13:53:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.153 13:53:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:14.153 13:53:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:14.153 13:53:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.153 13:53:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.153 ************************************ 00:07:14.153 START TEST accel_dif_generate_copy 00:07:14.153 ************************************ 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:14.153 13:53:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:14.153 [2024-07-15 13:53:11.974897] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
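The dif_generate case above passed in about 1.3 s of wall time (real 0m1.297s) on the software accel module, and the harness has already launched the next case, accel_dif_generate_copy, on the same one-second budget. The vals echoed through the trace (4096 bytes, 512 bytes, 8 bytes) appear to describe the buffer size, block size, and per-block DIF metadata handed to accel_perf. Going only by the command line visible in the trace, a minimal sketch of replaying either workload by hand, with $SPDK_DIR as a hypothetical stand-in for the jenkins workspace checkout (the harness additionally pipes a JSON accel config via -c /dev/fd/62):

# one-second software DIF-generate run
$SPDK_DIR/build/examples/accel_perf -t 1 -w dif_generate
# same geometry, but DIF generation and payload copy fused into one operation
$SPDK_DIR/build/examples/accel_perf -t 1 -w dif_generate_copy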
00:07:14.153 [2024-07-15 13:53:11.974996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160457 ] 00:07:14.153 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.153 [2024-07-15 13:53:12.046011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.153 [2024-07-15 13:53:12.115200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.153 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.154 13:53:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.537 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.537 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.537 13:53:13 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.538 00:07:15.538 real 0m1.300s 00:07:15.538 user 0m1.205s 00:07:15.538 sys 0m0.107s 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.538 13:53:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 ************************************ 00:07:15.538 END TEST accel_dif_generate_copy 00:07:15.538 ************************************ 00:07:15.538 13:53:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.538 13:53:13 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:15.538 13:53:13 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.538 13:53:13 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:15.538 13:53:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.538 13:53:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 ************************************ 00:07:15.538 START TEST accel_comp 00:07:15.538 ************************************ 00:07:15.538 13:53:13 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.538 13:53:13 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:15.538 [2024-07-15 13:53:13.350803] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:15.538 [2024-07-15 13:53:13.350872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160706 ] 00:07:15.538 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.538 [2024-07-15 13:53:13.422828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.538 [2024-07-15 13:53:13.496560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.538 13:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:16.930 13:53:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.930 00:07:16.930 real 0m1.307s 00:07:16.930 user 0m1.205s 00:07:16.930 sys 0m0.115s 00:07:16.930 13:53:14 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.930 13:53:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:16.930 ************************************ 00:07:16.930 END TEST accel_comp 00:07:16.930 ************************************ 00:07:16.930 13:53:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.930 13:53:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.930 13:53:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:16.930 13:53:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.930 13:53:14 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.931 ************************************ 00:07:16.931 START TEST accel_decomp 00:07:16.931 ************************************ 00:07:16.931 13:53:14 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:16.931 [2024-07-15 13:53:14.733429] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
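accel_comp completed in roughly 1.3 s, and the trace has moved on to accel_decomp, which decompresses the same bundled input file and verifies the result. Both command lines appear verbatim above; a shortened sketch with $SPDK_DIR standing in (hypothetically) for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:

# compress the bundled test file for one second on the software module
$SPDK_DIR/build/examples/accel_perf -t 1 -w compress -l $SPDK_DIR/test/accel/bib
# decompress the same file; -y asks accel_perf to verify the output
$SPDK_DIR/build/examples/accel_perf -t 1 -w decompress -l $SPDK_DIR/test/accel/bib -y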
00:07:16.931 [2024-07-15 13:53:14.733494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161056 ] 00:07:16.931 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.931 [2024-07-15 13:53:14.801275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.931 [2024-07-15 13:53:14.867121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.931 13:53:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:15 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.314 13:53:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.314 00:07:18.314 real 0m1.294s 00:07:18.314 user 0m1.209s 00:07:18.314 sys 0m0.098s 00:07:18.314 13:53:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.314 13:53:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:18.314 ************************************ 00:07:18.314 END TEST accel_decomp 00:07:18.314 ************************************ 00:07:18.314 13:53:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.314 13:53:16 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.314 13:53:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:18.314 13:53:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.314 13:53:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.314 ************************************ 00:07:18.314 START TEST accel_decomp_full 00:07:18.314 ************************************ 00:07:18.314 13:53:16 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.314 13:53:16 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:18.314 [2024-07-15 13:53:16.103235] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:18.314 [2024-07-15 13:53:16.103297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161409 ] 00:07:18.314 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.314 [2024-07-15 13:53:16.171316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.314 [2024-07-15 13:53:16.238663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.314 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.315 13:53:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.282 13:53:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.282 00:07:19.282 real 0m1.309s 00:07:19.282 user 0m1.210s 00:07:19.282 sys 0m0.112s 00:07:19.282 13:53:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.282 13:53:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:19.282 ************************************ 00:07:19.282 END TEST accel_decomp_full 00:07:19.282 ************************************ 00:07:19.543 13:53:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.543 13:53:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.543 13:53:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:19.543 13:53:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.543 13:53:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.543 ************************************ 00:07:19.543 START TEST accel_decomp_mcore 00:07:19.543 ************************************ 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:19.543 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:19.543 [2024-07-15 13:53:17.487365] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
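The EAL parameters on the next line show the effect of that mask: accel_perf is started with -c 0xf, DPDK claims cores 0-3, and SPDK reports four reactors starting instead of one. The mask is simply a bitmap of usable cores; a quick shell sanity check of how many cores a given mask selects:

# 0xf is binary 1111, i.e. cores 0,1,2,3 -> prints 4
echo 'obase=2; 15' | bc | tr -cd 1 | wc -c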
00:07:19.543 [2024-07-15 13:53:17.487431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161750 ] 00:07:19.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.543 [2024-07-15 13:53:17.557869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.543 [2024-07-15 13:53:17.631510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.543 [2024-07-15 13:53:17.631628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.543 [2024-07-15 13:53:17.631817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.543 [2024-07-15 13:53:17.631817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:53:17 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:19.804 13:53:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:07:20.745 13:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:07:20.746 13:53:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:20.746 13:53:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:20.746 13:53:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:20.746 
00:07:20.746 real	0m1.311s
00:07:20.746 user	0m4.444s
00:07:20.746 sys	0m0.115s
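The repeated IFS=: / read -r var val / case "$var" records traced above are accel.sh's option loop: it consumes colon-separated key:value pairs and latches values such as the module and opcode. A minimal sketch of that pattern — the key names and the input source below are illustrative assumptions, not the script's exact internals:

    # parse key:value pairs, one per line, dispatching on the key
    while IFS=: read -r var val; do
        case "$var" in
            module) accel_module=$val ;;   # traced above as accel_module=software
            opc)    accel_opc=$val ;;      # traced above as accel_opc=decompress
        esac
    done < <(printf 'module:software\nopc:decompress\n')   # hypothetical stand-in input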
00:07:20.746 13:53:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:20.746 13:53:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:20.746 ************************************
00:07:20.746 END TEST accel_decomp_mcore
00:07:20.746 ************************************
00:07:20.746 13:53:18 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:20.746 13:53:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:20.746 ************************************
00:07:20.746 START TEST accel_decomp_full_mcore
00:07:20.746 ************************************
00:07:20.746 13:53:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:20.746 13:53:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:20.746 13:53:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:20.746 13:53:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
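Every test in this log is wrapped by run_test from autotest_common.sh, which prints the starred START TEST / END TEST banners and the real/user/sys triple between them. A hedged sketch of the shape of such a wrapper — the banner text is taken from this log, but the body is an assumption, not SPDK's exact implementation:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"    # bash's time builtin produces the real/user/sys lines seen here
        local rc=$?
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }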
00:07:21.008 [2024-07-15 13:53:18.877336] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:21.008 [2024-07-15 13:53:18.877420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161946 ]
00:07:21.008 EAL: No free 2048 kB hugepages reported on node 1
00:07:21.008 [2024-07-15 13:53:18.949687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:21.008 [2024-07-15 13:53:19.024838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:21.008 [2024-07-15 13:53:19.024977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:21.008 [2024-07-15 13:53:19.025138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:21.008 [2024-07-15 13:53:19.025139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:21.008 13:53:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:22.393 13:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
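The -c 0xf core mask in the EAL parameters above is why four reactors come up (cores 0 through 3). Expanding such a mask into core indices is plain shell arithmetic, independent of SPDK:

    mask=0xf
    for ((i = 0; i < 64; i++)); do
        (( (mask >> i) & 1 )) && echo "core $i"
    done
    # prints core 0 through core 3 for 0xf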
00:07:22.393 13:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:22.393 13:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:22.393 13:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:22.393 
00:07:22.393 real	0m1.325s
00:07:22.393 user	0m4.480s
00:07:22.393 sys	0m0.121s
00:07:22.393 ************************************
00:07:22.393 END TEST accel_decomp_full_mcore
00:07:22.393 ************************************
00:07:22.393 13:53:20 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:22.393 13:53:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:22.393 ************************************
00:07:22.393 START TEST accel_decomp_mthread
00:07:22.393 ************************************
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
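build_accel_config, traced at accel.sh@12 above, assembles the JSON that accel_perf receives on /dev/fd/62. A rough sketch of what the traced statements (accel_json_cfg=(), the [[ 0 -gt 0 ]] guards, local IFS=,, jq -r .) add up to — the emitted JSON shape here is an assumption, not the script's literal output:

    build_accel_config() {
        accel_json_cfg=()    # optional module sections; none of the traced guards fire in this run
        local IFS=,
        # join any collected config fragments and pretty-print; shape is hypothetical
        printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' "${accel_json_cfg[*]}" | jq -r .
    }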
00:07:22.393 [2024-07-15 13:53:20.280473] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:22.393 [2024-07-15 13:53:20.280567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162169 ]
00:07:22.393 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.393 [2024-07-15 13:53:20.349825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.393 [2024-07-15 13:53:20.417138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:07:22.393 13:53:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
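Compare the two scheduling strategies in this pair of runs — both command lines are taken verbatim from the run_test entries in this log; only the $bib shorthand is editorial:

    bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l $bib -y -o 0 -m 0xf   # full_mcore: 4 reactors (cores 0-3)
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l $bib -y -T 2          # mthread: 1 reactor, thread count read back as val=2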
00:07:23.778 13:53:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:07:23.779 13:53:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:23.779 13:53:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:23.779 13:53:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:23.779 
00:07:23.779 real	0m1.302s
00:07:23.779 user	0m1.201s
00:07:23.779 sys	0m0.113s
00:07:23.779 ************************************
00:07:23.779 END TEST accel_decomp_mthread
00:07:23.779 ************************************
00:07:23.779 13:53:21 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:23.779 13:53:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
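The "full" variants add -o 0, and the trace then reads back val='111250 bytes' instead of the val='4096 bytes' seen in the plain mthread run above — apparently the whole bib test file rather than a fixed 4 KiB transfer. Side by side, with flags from the run_test lines and $bib as before:

    accel_test -t 1 -w decompress -l $bib -y -T 2        # mthread:       val='4096 bytes'
    accel_test -t 1 -w decompress -l $bib -y -o 0 -T 2   # full_mthread:  val='111250 bytes'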
00:07:23.779 ************************************
00:07:23.779 START TEST accel_decomp_full_mthread
00:07:23.779 ************************************
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:07:23.779 [2024-07-15 13:53:21.657432] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:23.779 [2024-07-15 13:53:21.657500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162503 ]
00:07:23.779 EAL: No free 2048 kB hugepages reported on node 1
00:07:23.779 [2024-07-15 13:53:21.725828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.779 [2024-07-15 13:53:21.791366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:07:23.779 13:53:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:07:25.165 13:53:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:07:25.165 13:53:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:25.165 13:53:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:25.165 13:53:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.165 
00:07:25.165 real	0m1.324s
00:07:25.165 user	0m1.238s
00:07:25.165 sys	0m0.099s
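Each decompress variant finishes with the same three assertions, traced at accel.sh@27 just above. Spelled out with the variable names the trace has already substituted (a paraphrase, not the literal script text):

    [[ -n $accel_module ]]           # an engine was selected at all
    [[ -n $accel_opc ]]              # the opcode made the round trip
    [[ $accel_module == software ]]  # the software fallback handled decompress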
00:07:25.165 ************************************
00:07:25.165 END TEST accel_decomp_full_mthread
00:07:25.165 ************************************
00:07:25.166 13:53:22 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:25.166 13:53:22 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:07:25.166 13:53:22 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:25.166 13:53:22 accel -- accel/accel.sh@137 -- # build_accel_config
00:07:25.166 ************************************
00:07:25.166 START TEST accel_dif_functional_tests
00:07:25.166 ************************************
00:07:25.165 [2024-07-15 13:53:23.078234] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:25.165 [2024-07-15 13:53:23.078292] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162853 ]
00:07:25.165 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.165 [2024-07-15 13:53:23.145092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:25.165 [2024-07-15 13:53:23.214168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:25.165 [2024-07-15 13:53:23.214284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:25.165 [2024-07-15 13:53:23.214286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.165 
00:07:25.165 CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.165 http://cunit.sourceforge.net/
00:07:25.165 
00:07:25.165 Suite: accel_dif
00:07:25.165 Test: verify: DIF generated, GUARD check ...passed
00:07:25.165 Test: verify: DIF generated, APPTAG check ...passed
00:07:25.165 Test: verify: DIF generated, REFTAG check ...passed
00:07:25.165 Test: verify: DIF not generated, GUARD check ...[2024-07-15 13:53:23.269310] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:25.165 passed
00:07:25.165 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:53:23.269354] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:25.165 passed
00:07:25.165 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 13:53:23.269374] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:25.165 passed
00:07:25.165 Test: verify: APPTAG correct, APPTAG check ...passed
00:07:25.165 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 13:53:23.269422] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:25.165 passed
00:07:25.165 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:25.165 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:25.165 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:25.165 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 13:53:23.269535] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:25.165 passed
00:07:25.165 Test: verify copy: DIF generated, GUARD check ...passed
00:07:25.165 Test: verify copy: DIF generated, APPTAG check ...passed
00:07:25.165 Test: verify copy: DIF generated, REFTAG check ...passed
00:07:25.166 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 13:53:23.269656] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:25.166 passed
00:07:25.166 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:53:23.269678] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:25.166 passed
00:07:25.166 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 13:53:23.269700] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:25.166 passed
00:07:25.166 Test: generate copy: DIF generated, GUARD check ...passed
00:07:25.166 Test: generate copy: DIF generated, APTTAG check ...passed
00:07:25.166 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:25.166 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:25.166 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:25.166 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
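The dif binary exercised above receives its accel configuration as JSON on /dev/fd/62 (see the -c /dev/fd/62 invocation at the start of this test). The same no-temp-file pattern can be reproduced with process substitution — the config content below is a hypothetical placeholder, not the harness's real config:

    # feed a JSON config to the test binary without writing a temp file
    ./test/accel/dif/dif -c <(printf '{"subsystems": []}')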
00:07:25.166 Test: generate copy: iovecs-len validate ...[2024-07-15 13:53:23.269890] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:25.166 passed
00:07:25.166 Test: generate copy: buffer alignment validate ...passed
00:07:25.166 
00:07:25.166 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.166               suites      1      1    n/a      0        0
00:07:25.166                tests     26     26     26      0        0
00:07:25.166              asserts    115    115    115      0      n/a
00:07:25.166 
00:07:25.166 Elapsed time =    0.002 seconds
00:07:25.426 
00:07:25.426 real	0m0.357s
00:07:25.426 user	0m0.492s
00:07:25.426 sys	0m0.127s
00:07:25.426 ************************************
00:07:25.426 END TEST accel_dif_functional_tests
00:07:25.426 ************************************
00:07:25.426 13:53:23 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:25.426 
00:07:25.426 real	0m30.366s
00:07:25.426 user	0m33.730s
00:07:25.426 sys	0m4.384s
00:07:25.426 ************************************
00:07:25.426 END TEST accel
00:07:25.426 ************************************
00:07:25.426 13:53:23 -- common/autotest_common.sh@1142 -- # return 0
00:07:25.426 13:53:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:25.426 ************************************
00:07:25.426 START TEST accel_rpc
00:07:25.426 ************************************
00:07:25.690 13:53:23 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:25.690 * Looking for test storage...
00:07:25.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:07:25.690 13:53:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:25.690 13:53:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1162931
00:07:25.690 13:53:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1162931
00:07:25.690 13:53:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:25.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:25.690 [2024-07-15 13:53:23.656902] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:25.690 [2024-07-15 13:53:23.656978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162931 ]
00:07:25.690 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.690 [2024-07-15 13:53:23.729616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.012 [2024-07-15 13:53:23.805523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.585 13:53:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:26.585 13:53:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:07:26.585 ************************************
00:07:26.585 START TEST accel_assign_opcode
00:07:26.585 ************************************
00:07:26.585 13:53:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:07:26.585 [2024-07-15 13:53:24.459442] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:07:26.585 13:53:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:07:26.585 [2024-07-15 13:53:24.471469] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:07:26.585 13:53:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:07:26.585 13:53:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:07:26.585 13:53:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
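The assign-opcode test boils down to four RPCs, all visible in the trace around this point: force a bogus module, fix it, finish framework init, then read the assignment back:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect   # accepted pre-init; logged as a NOTICE above
    $rpc accel_assign_opc -o copy -m software
    $rpc framework_start_init
    $rpc accel_get_opc_assignments | jq -r .copy | grep software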
00:07:26.585 13:53:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:07:26.585 software
00:07:26.585 
00:07:26.585 real	0m0.208s
00:07:26.585 user	0m0.042s
00:07:26.585 sys	0m0.017s
00:07:26.585 ************************************
00:07:26.585 END TEST accel_assign_opcode
00:07:26.585 ************************************
00:07:26.585 13:53:24 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:07:26.585 13:53:24 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1162931
00:07:26.845 13:53:24 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1162931'
00:07:26.845 killing process with pid 1162931
00:07:26.845 13:53:24 accel_rpc -- common/autotest_common.sh@967 -- # kill 1162931
00:07:26.845 13:53:24 accel_rpc -- common/autotest_common.sh@972 -- # wait 1162931
00:07:26.845 
00:07:26.845 real	0m1.432s
00:07:26.845 user	0m1.490s
00:07:26.845 sys	0m0.408s
00:07:27.106 ************************************
00:07:27.106 END TEST accel_rpc
00:07:27.106 ************************************
00:07:27.106 13:53:24 -- common/autotest_common.sh@1142 -- # return 0
00:07:27.106 13:53:24 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:07:27.106 ************************************
00:07:27.106 START TEST app_cmdline
00:07:27.106 ************************************
00:07:27.106 13:53:25 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:07:27.106 * Looking for test storage...
00:07:27.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.106 13:53:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.106 13:53:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1163335 00:07:27.106 13:53:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1163335 00:07:27.106 13:53:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.106 13:53:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1163335 ']' 00:07:27.106 13:53:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.106 13:53:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.106 13:53:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.106 13:53:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.106 13:53:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.106 [2024-07-15 13:53:25.179277] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:27.106 [2024-07-15 13:53:25.179343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163335 ] 00:07:27.106 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.367 [2024-07-15 13:53:25.253334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.367 [2024-07-15 13:53:25.326874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.939 13:53:25 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.939 13:53:25 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:27.939 13:53:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.200 { 00:07:28.200 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:07:28.200 "fields": { 00:07:28.200 "major": 24, 00:07:28.200 "minor": 9, 00:07:28.200 "patch": 0, 00:07:28.200 "suffix": "-pre", 00:07:28.200 "commit": "2728651ee" 00:07:28.200 } 00:07:28.200 } 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.200 13:53:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.200 13:53:26 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.200 request: 00:07:28.200 { 00:07:28.200 "method": "env_dpdk_get_mem_stats", 00:07:28.200 "req_id": 1 00:07:28.200 } 00:07:28.200 Got JSON-RPC error response 00:07:28.200 response: 00:07:28.200 { 00:07:28.200 "code": -32601, 00:07:28.200 "message": "Method not found" 00:07:28.200 } 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.461 13:53:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1163335 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1163335 ']' 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1163335 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163335 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163335' 00:07:28.461 killing process with pid 1163335 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@967 -- # kill 1163335 00:07:28.461 13:53:26 app_cmdline -- common/autotest_common.sh@972 -- # wait 1163335 00:07:28.723 00:07:28.723 real 0m1.573s 00:07:28.723 user 0m1.893s 00:07:28.723 sys 0m0.414s 00:07:28.723 13:53:26 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
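The -32601 "Method not found" response above is the point of the test: cmdline.sh started spdk_tgt with an RPC allowlist (--rpcs-allowed spdk_get_version,rpc_get_methods), so any method outside that list is rejected even though it exists in an unrestricted target. A minimal reproduction, with paths and socket as in the trace:

  # Start the target allowing exactly two RPC methods.
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # Allowed methods answer normally...
  scripts/rpc.py spdk_get_version
  scripts/rpc.py rpc_get_methods
  # ...anything else fails with JSON-RPC error -32601 ("Method not found").
  scripts/rpc.py env_dpdk_get_mem_stats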
00:07:28.723 13:53:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.723 ************************************ 00:07:28.723 END TEST app_cmdline 00:07:28.723 ************************************ 00:07:28.723 13:53:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.723 13:53:26 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:28.723 13:53:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.723 13:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.723 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:28.723 ************************************ 00:07:28.723 START TEST version 00:07:28.723 ************************************ 00:07:28.723 13:53:26 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:28.723 * Looking for test storage... 00:07:28.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.723 13:53:26 version -- app/version.sh@17 -- # get_header_version major 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # cut -f2 00:07:28.723 13:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.723 13:53:26 version -- app/version.sh@17 -- # major=24 00:07:28.723 13:53:26 version -- app/version.sh@18 -- # get_header_version minor 00:07:28.723 13:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # cut -f2 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.723 13:53:26 version -- app/version.sh@18 -- # minor=9 00:07:28.723 13:53:26 version -- app/version.sh@19 -- # get_header_version patch 00:07:28.723 13:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # cut -f2 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.723 13:53:26 version -- app/version.sh@19 -- # patch=0 00:07:28.723 13:53:26 version -- app/version.sh@20 -- # get_header_version suffix 00:07:28.723 13:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # cut -f2 00:07:28.723 13:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.723 13:53:26 version -- app/version.sh@20 -- # suffix=-pre 00:07:28.723 13:53:26 version -- app/version.sh@22 -- # version=24.9 00:07:28.723 13:53:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:28.723 13:53:26 version -- app/version.sh@28 -- # version=24.9rc0 00:07:28.723 13:53:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:28.723 13:53:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:28.723 13:53:26 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:28.723 13:53:26 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:28.723 00:07:28.723 real 0m0.172s 00:07:28.723 user 0m0.089s 00:07:28.723 sys 0m0.116s 00:07:28.723 13:53:26 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.984 13:53:26 version -- common/autotest_common.sh@10 -- # set +x 00:07:28.984 ************************************ 00:07:28.984 END TEST version 00:07:28.984 ************************************ 00:07:28.984 13:53:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.984 13:53:26 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:28.984 13:53:26 -- spdk/autotest.sh@198 -- # uname -s 00:07:28.984 13:53:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:28.984 13:53:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:28.984 13:53:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:28.984 13:53:26 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:28.984 13:53:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:28.984 13:53:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:28.984 13:53:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.984 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:28.984 13:53:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:28.984 13:53:26 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:28.984 13:53:26 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:28.984 13:53:26 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:28.984 13:53:26 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:28.984 13:53:26 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:28.985 13:53:26 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:28.985 13:53:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:28.985 13:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.985 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:28.985 ************************************ 00:07:28.985 START TEST nvmf_tcp 00:07:28.985 ************************************ 00:07:28.985 13:53:26 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:28.985 * Looking for test storage... 00:07:28.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.985 13:53:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.247 13:53:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.247 13:53:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.247 13:53:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.247 13:53:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.247 13:53:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.247 13:53:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.247 13:53:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:29.247 13:53:27 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:29.247 13:53:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.247 13:53:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:29.247 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.247 13:53:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.247 13:53:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.247 13:53:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.247 ************************************ 00:07:29.247 START TEST nvmf_example 00:07:29.247 ************************************ 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.247 * Looking for test storage... 
00:07:29.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:29.247 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.248 13:53:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.382 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:37.383 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:37.383 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:37.383 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:37.383 Found net devices under 0000:31:00.1: cvl_0_1 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.383 13:53:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:07:37.383 00:07:37.383 --- 10.0.0.2 ping statistics --- 00:07:37.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.383 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:07:37.383 00:07:37.383 --- 10.0.0.1 ping statistics --- 00:07:37.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.383 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1168115 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1168115 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1168115 ']' 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
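Stripped of the xtrace prefixes, the network plumbing nvmf_tcp_init performed above is a short sequence: the target NIC moves into a private namespace while the initiator NIC stays in the root namespace, the NVMe/TCP port is opened, and reachability is checked both ways. A sketch using the interface names from this run (cvl_0_0/cvl_0_1; they will differ per machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator side keeps 10.0.0.1; the target namespace gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The example target itself then runs inside the target namespace:
  ip netns exec cvl_0_0_ns_spdk build/examples/nvmf -i 0 -g 10000 -m 0xF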
00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.383 13:53:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.383 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.954 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.954 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:37.954 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:37.954 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.954 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.215 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:38.216 13:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:38.216 EAL: No free 2048 kB hugepages reported on node 1 
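Collapsed out of the rpc_cmd traces above, the target bring-up and the load generator whose output follows amount to six commands; all flags are copied verbatim from the trace (-o and -u 8192 come from NVMF_TRANSPORT_OPTS, 64/512 from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE):

  # TCP transport, one 64 MiB malloc bdev with 512-byte blocks, one subsystem
  # open to any host (-a) and listening on the namespaced target IP.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512          # returns bdev name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 10 s of queue-depth-64, 4 KiB random mixed I/O (-M 30 sets the read share).
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'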
00:07:50.450 Initializing NVMe Controllers 00:07:50.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:50.450 Initialization complete. Launching workers. 00:07:50.450 ======================================================== 00:07:50.450 Latency(us) 00:07:50.450 Device Information : IOPS MiB/s Average min max 00:07:50.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19117.98 74.68 3347.51 564.83 41888.74 00:07:50.450 ======================================================== 00:07:50.450 Total : 19117.98 74.68 3347.51 564.83 41888.74 00:07:50.450 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.450 rmmod nvme_tcp 00:07:50.450 rmmod nvme_fabrics 00:07:50.450 rmmod nvme_keyring 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1168115 ']' 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1168115 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1168115 ']' 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1168115 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1168115 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1168115' 00:07:50.450 killing process with pid 1168115 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1168115 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1168115 00:07:50.450 nvmf threads initialize successfully 00:07:50.450 bdev subsystem init successfully 00:07:50.450 created a nvmf target service 00:07:50.450 create targets's poll groups done 00:07:50.450 all subsystems of target started 00:07:50.450 nvmf target is running 00:07:50.450 all subsystems of target stopped 00:07:50.450 destroy targets's poll groups done 00:07:50.450 destroyed the nvmf target service 00:07:50.450 bdev subsystem finish successfully 00:07:50.450 nvmf threads destroy successfully 00:07:50.450 13:53:46 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.450 13:53:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.710 13:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.710 13:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:50.710 13:53:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.710 13:53:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.710 00:07:50.710 real 0m21.598s 00:07:50.710 user 0m45.644s 00:07:50.710 sys 0m7.345s 00:07:50.710 13:53:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.710 13:53:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.710 ************************************ 00:07:50.710 END TEST nvmf_example 00:07:50.711 ************************************ 00:07:50.711 13:53:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:50.711 13:53:48 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:50.711 13:53:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.711 13:53:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.711 13:53:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 START TEST nvmf_filesystem 00:07:50.711 ************************************ 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:50.974 * Looking for test storage... 
00:07:50.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:50.974 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:50.975 13:53:48 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:50.975 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:50.975 #define SPDK_CONFIG_H 00:07:50.975 #define SPDK_CONFIG_APPS 1 00:07:50.975 #define SPDK_CONFIG_ARCH native 00:07:50.975 #undef SPDK_CONFIG_ASAN 00:07:50.975 #undef SPDK_CONFIG_AVAHI 00:07:50.975 #undef SPDK_CONFIG_CET 00:07:50.975 #define SPDK_CONFIG_COVERAGE 1 00:07:50.975 #define SPDK_CONFIG_CROSS_PREFIX 00:07:50.975 #undef SPDK_CONFIG_CRYPTO 00:07:50.975 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:50.975 #undef SPDK_CONFIG_CUSTOMOCF 00:07:50.975 #undef SPDK_CONFIG_DAOS 00:07:50.975 #define SPDK_CONFIG_DAOS_DIR 00:07:50.975 #define SPDK_CONFIG_DEBUG 1 00:07:50.975 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:50.975 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:50.975 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:50.975 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:50.975 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:50.975 #undef SPDK_CONFIG_DPDK_UADK 00:07:50.975 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:50.975 #define SPDK_CONFIG_EXAMPLES 1 00:07:50.975 #undef SPDK_CONFIG_FC 00:07:50.975 #define SPDK_CONFIG_FC_PATH 00:07:50.975 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:50.975 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:50.975 #undef SPDK_CONFIG_FUSE 00:07:50.975 #undef SPDK_CONFIG_FUZZER 00:07:50.975 #define SPDK_CONFIG_FUZZER_LIB 00:07:50.975 #undef SPDK_CONFIG_GOLANG 00:07:50.975 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:50.975 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:50.975 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:50.975 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:50.975 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:50.975 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:50.975 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:50.975 #define SPDK_CONFIG_IDXD 1 00:07:50.975 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:50.975 #undef SPDK_CONFIG_IPSEC_MB 00:07:50.975 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:50.975 #define SPDK_CONFIG_ISAL 1 00:07:50.975 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:50.975 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:50.975 #define SPDK_CONFIG_LIBDIR 00:07:50.975 #undef SPDK_CONFIG_LTO 00:07:50.976 #define SPDK_CONFIG_MAX_LCORES 128 00:07:50.976 #define SPDK_CONFIG_NVME_CUSE 1 00:07:50.976 #undef SPDK_CONFIG_OCF 00:07:50.976 #define SPDK_CONFIG_OCF_PATH 00:07:50.976 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:50.976 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:50.976 #define SPDK_CONFIG_PGO_DIR 00:07:50.976 #undef SPDK_CONFIG_PGO_USE 00:07:50.976 #define SPDK_CONFIG_PREFIX /usr/local 00:07:50.976 #undef SPDK_CONFIG_RAID5F 00:07:50.976 #undef SPDK_CONFIG_RBD 00:07:50.976 #define SPDK_CONFIG_RDMA 1 00:07:50.976 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:50.976 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:50.976 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:50.976 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:50.976 #define SPDK_CONFIG_SHARED 1 00:07:50.976 #undef SPDK_CONFIG_SMA 00:07:50.976 #define SPDK_CONFIG_TESTS 1 00:07:50.976 #undef SPDK_CONFIG_TSAN 00:07:50.976 #define SPDK_CONFIG_UBLK 1 00:07:50.976 #define SPDK_CONFIG_UBSAN 1 00:07:50.976 #undef SPDK_CONFIG_UNIT_TESTS 00:07:50.976 #undef SPDK_CONFIG_URING 00:07:50.976 #define SPDK_CONFIG_URING_PATH 00:07:50.976 #undef SPDK_CONFIG_URING_ZNS 00:07:50.976 #undef SPDK_CONFIG_USDT 00:07:50.976 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:50.976 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:50.976 #define SPDK_CONFIG_VFIO_USER 1 00:07:50.976 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:50.976 #define SPDK_CONFIG_VHOST 1 00:07:50.976 #define SPDK_CONFIG_VIRTIO 1 00:07:50.976 #undef SPDK_CONFIG_VTUNE 00:07:50.976 #define SPDK_CONFIG_VTUNE_DIR 00:07:50.976 #define SPDK_CONFIG_WERROR 1 00:07:50.976 #define SPDK_CONFIG_WPDK_DIR 00:07:50.976 #undef SPDK_CONFIG_XNVME 00:07:50.976 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:50.976 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:50.977 13:53:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:50.977 13:53:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:50.977 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
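The long run of `-- # : 0` / `-- # export SPDK_TEST_*` pairs above is xtrace output of bash's default-assignment idiom: autotest_common.sh gives each test flag a default with `: ${VAR=default}` and then exports it, so values set earlier by autorun-spdk.conf (for example SPDK_TEST_NVMF=1) are kept while everything unset falls back to its default. A minimal sketch of the pattern, with illustrative defaults (the real defaults live in autotest_common.sh):

  # ':' is a no-op; ${VAR=word} assigns word only if VAR is unset,
  # so flags exported by autorun-spdk.conf win over the defaults here.
  : "${SPDK_TEST_NVMF=0}"
  : "${SPDK_TEST_NVME_CLI=0}"
  : "${SPDK_TEST_NVMF_TRANSPORT=tcp}"
  export SPDK_TEST_NVMF SPDK_TEST_NVME_CLI SPDK_TEST_NVMF_TRANSPORT

Under `set -x`, `: ${SPDK_TEST_NVMF=1}` traces as the already-expanded `: 1`, which is why the log shows `: 1` before `export SPDK_TEST_NVMF` for flags this job enables and `: 0` before the flags it leaves disabled.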
00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1170915 ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1170915 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.h4VNNh 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.h4VNNh/tests/target /tmp/spdk.h4VNNh 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953012224 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4331417600 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122883346432 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370992640 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6487646208 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64682119168 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685494272 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864273920 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9924608 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=353280 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:50.978 13:53:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=150528 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684941312 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685498368 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=557056 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:50.978 * Looking for test storage... 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122883346432 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8702238720 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.978 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.979 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.979 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.979 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.979 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.979 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.979 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.979 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.240 13:53:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:59.381 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:59.381 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.381 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:59.382 Found net devices under 0000:31:00.0: cvl_0_0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:59.382 Found net devices under 0000:31:00.1: cvl_0_1 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:07:59.382 00:07:59.382 --- 10.0.0.2 ping statistics --- 00:07:59.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.382 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:07:59.382 00:07:59.382 --- 10.0.0.1 ping statistics --- 00:07:59.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.382 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.382 ************************************ 00:07:59.382 START TEST nvmf_filesystem_no_in_capsule 00:07:59.382 ************************************ 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1175219 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1175219 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1175219 ']' 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.382 13:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.642 [2024-07-15 13:53:57.516809] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:59.642 [2024-07-15 13:53:57.516855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.642 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.642 [2024-07-15 13:53:57.590136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.642 [2024-07-15 13:53:57.657071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.642 [2024-07-15 13:53:57.657108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.643 [2024-07-15 13:53:57.657116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.643 [2024-07-15 13:53:57.657123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.643 [2024-07-15 13:53:57.657128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.643 [2024-07-15 13:53:57.657268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.643 [2024-07-15 13:53:57.657380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.643 [2024-07-15 13:53:57.657536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.643 [2024-07-15 13:53:57.657537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.214 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.214 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:00.214 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.214 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.214 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.474 [2024-07-15 13:53:58.334448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
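
The namespace bring-up traced above (nvmf_tcp_init in nvmf/common.sh) can be replayed by hand. A condensed sketch, taken from the commands echoed in this run — the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this host, not fixed defaults:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420)
  ping -c 1 10.0.0.2                                    # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target netns -> initiator

Splitting the two ports of one e810 NIC across namespaces forces initiator traffic over the physical link even though target and initiator share a host; nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk, which is why its pid checks and RPCs below carry that prefix.
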
00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.474 Malloc1 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.474 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.474 [2024-07-15 13:53:58.463185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:00.475 { 00:08:00.475 "name": "Malloc1", 00:08:00.475 "aliases": [ 00:08:00.475 "d6bbd15f-f320-48d8-8be5-ce1096874a1c" 00:08:00.475 ], 00:08:00.475 "product_name": "Malloc disk", 00:08:00.475 "block_size": 512, 00:08:00.475 "num_blocks": 1048576, 00:08:00.475 "uuid": "d6bbd15f-f320-48d8-8be5-ce1096874a1c", 00:08:00.475 "assigned_rate_limits": { 00:08:00.475 "rw_ios_per_sec": 0, 00:08:00.475 "rw_mbytes_per_sec": 0, 00:08:00.475 "r_mbytes_per_sec": 0, 00:08:00.475 "w_mbytes_per_sec": 0 00:08:00.475 }, 00:08:00.475 "claimed": true, 00:08:00.475 "claim_type": "exclusive_write", 00:08:00.475 "zoned": false, 00:08:00.475 "supported_io_types": { 00:08:00.475 "read": true, 00:08:00.475 "write": true, 00:08:00.475 "unmap": true, 00:08:00.475 "flush": true, 00:08:00.475 "reset": true, 00:08:00.475 "nvme_admin": false, 00:08:00.475 "nvme_io": false, 00:08:00.475 "nvme_io_md": false, 00:08:00.475 "write_zeroes": true, 00:08:00.475 "zcopy": true, 00:08:00.475 "get_zone_info": false, 00:08:00.475 "zone_management": false, 00:08:00.475 "zone_append": false, 00:08:00.475 "compare": false, 00:08:00.475 "compare_and_write": false, 00:08:00.475 "abort": true, 00:08:00.475 "seek_hole": false, 00:08:00.475 "seek_data": false, 00:08:00.475 "copy": true, 00:08:00.475 "nvme_iov_md": false 00:08:00.475 }, 00:08:00.475 "memory_domains": [ 00:08:00.475 { 00:08:00.475 "dma_device_id": "system", 00:08:00.475 "dma_device_type": 1 00:08:00.475 }, 00:08:00.475 { 00:08:00.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.475 "dma_device_type": 2 00:08:00.475 } 00:08:00.475 ], 00:08:00.475 "driver_specific": {} 00:08:00.475 } 00:08:00.475 ]' 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:00.475 13:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.428 13:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.428 13:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:02.428 13:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:08:02.428 13:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:02.428 13:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:04.341 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:04.342 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:04.342 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:04.342 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:04.602 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:04.863 13:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:05.872 13:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:05.872 13:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:05.872 13:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:05.872 13:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.872 13:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.133 
************************************ 00:08:06.133 START TEST filesystem_ext4 00:08:06.133 ************************************ 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:06.133 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:06.133 mke2fs 1.46.5 (30-Dec-2021) 00:08:06.133 Discarding device blocks: 0/522240 done 00:08:06.133 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:06.133 Filesystem UUID: af573cc9-81f4-49c8-9849-b34cc2c04bbb 00:08:06.133 Superblock backups stored on blocks: 00:08:06.133 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:06.133 00:08:06.133 Allocating group tables: 0/64 done 00:08:06.133 Writing inode tables: 0/64 done 00:08:06.394 Creating journal (8192 blocks): done 00:08:06.394 Writing superblocks and filesystem accounting information: 0/64 done 00:08:06.394 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.394 13:54:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1175219 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.394 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.655 00:08:06.655 real 0m0.507s 00:08:06.655 user 0m0.025s 00:08:06.655 sys 0m0.043s 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:06.655 ************************************ 00:08:06.655 END TEST filesystem_ext4 00:08:06.655 ************************************ 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.655 ************************************ 00:08:06.655 START TEST filesystem_btrfs 00:08:06.655 ************************************ 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:06.655 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:06.656 
13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:06.915 btrfs-progs v6.6.2 00:08:06.915 See https://btrfs.readthedocs.io for more information. 00:08:06.915 00:08:06.915 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:06.915 NOTE: several default settings have changed in version 5.15, please make sure 00:08:06.915 this does not affect your deployments: 00:08:06.915 - DUP for metadata (-m dup) 00:08:06.915 - enabled no-holes (-O no-holes) 00:08:06.915 - enabled free-space-tree (-R free-space-tree) 00:08:06.915 00:08:06.915 Label: (null) 00:08:06.915 UUID: 78fde7bf-a3eb-4211-b1f0-2d50b395af6c 00:08:06.915 Node size: 16384 00:08:06.915 Sector size: 4096 00:08:06.915 Filesystem size: 510.00MiB 00:08:06.915 Block group profiles: 00:08:06.915 Data: single 8.00MiB 00:08:06.915 Metadata: DUP 32.00MiB 00:08:06.915 System: DUP 8.00MiB 00:08:06.915 SSD detected: yes 00:08:06.915 Zoned device: no 00:08:06.915 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:06.915 Runtime features: free-space-tree 00:08:06.915 Checksum: crc32c 00:08:06.915 Number of devices: 1 00:08:06.915 Devices: 00:08:06.915 ID SIZE PATH 00:08:06.915 1 510.00MiB /dev/nvme0n1p1 00:08:06.915 00:08:06.915 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:06.915 13:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1175219 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.856 00:08:07.856 real 0m1.112s 00:08:07.856 user 0m0.023s 00:08:07.856 sys 0m0.063s 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
00:08:07.856 ************************************ 00:08:07.856 END TEST filesystem_btrfs 00:08:07.856 ************************************ 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.856 ************************************ 00:08:07.856 START TEST filesystem_xfs 00:08:07.856 ************************************ 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:07.856 13:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:08.116 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:08.117 = sectsz=512 attr=2, projid32bit=1 00:08:08.117 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:08.117 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:08.117 data = bsize=4096 blocks=130560, imaxpct=25 00:08:08.117 = sunit=0 swidth=0 blks 00:08:08.117 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:08.117 log =internal log bsize=4096 blocks=16384, version=2 00:08:08.117 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:08.117 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:09.501 Discarding blocks...Done. 
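
With the xfs format complete, this subtest runs the same verification pass as the ext4 and btrfs ones before it: mount, one write/sync/delete round-trip, unmount, then confirm the target survived. Condensed from the trace (1175219 is this run's nvmf_tgt pid; the force flag is -F for ext4 and -f for btrfs/xfs, per the make_filesystem branches above — piping lsblk into grep here is a compaction of the separately traced commands):

  mkfs.xfs -f /dev/nvme0n1p1            # or mkfs.ext4 -F / mkfs.btrfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync         # minimal write-back round trip
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 1175219                       # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1     # exported namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition table intact after umount
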
00:08:09.501 13:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:09.501 13:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.884 13:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.884 13:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:10.884 13:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.144 13:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1175219 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.144 00:08:11.144 real 0m3.252s 00:08:11.144 user 0m0.025s 00:08:11.144 sys 0m0.056s 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:11.144 ************************************ 00:08:11.144 END TEST filesystem_xfs 00:08:11.144 ************************************ 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:11.144 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:11.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.405 13:54:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1175219 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1175219 ']' 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1175219 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1175219 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1175219' 00:08:11.405 killing process with pid 1175219 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1175219 00:08:11.405 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1175219 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:11.666 00:08:11.666 real 0m12.166s 00:08:11.666 user 0m47.930s 00:08:11.666 sys 0m1.024s 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.666 ************************************ 00:08:11.666 END TEST nvmf_filesystem_no_in_capsule 00:08:11.666 ************************************ 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.666 ************************************ 00:08:11.666 START TEST nvmf_filesystem_in_capsule 00:08:11.666 ************************************ 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.666 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1178153 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1178153 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1178153 ']' 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.667 13:54:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.667 [2024-07-15 13:54:09.769911] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:11.667 [2024-07-15 13:54:09.769967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.927 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.927 [2024-07-15 13:54:09.847915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.927 [2024-07-15 13:54:09.923056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.927 [2024-07-15 13:54:09.923094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:11.927 [2024-07-15 13:54:09.923102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.927 [2024-07-15 13:54:09.923108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.927 [2024-07-15 13:54:09.923113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.927 [2024-07-15 13:54:09.923255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.927 [2024-07-15 13:54:09.923379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.927 [2024-07-15 13:54:09.923535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.927 [2024-07-15 13:54:09.923536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.498 [2024-07-15 13:54:10.597347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.498 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.759 Malloc1 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.759 13:54:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.759 [2024-07-15 13:54:10.728001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:12.759 { 00:08:12.759 "name": "Malloc1", 00:08:12.759 "aliases": [ 00:08:12.759 "1add9abb-02ce-4ef7-af88-410a02abe5ba" 00:08:12.759 ], 00:08:12.759 "product_name": "Malloc disk", 00:08:12.759 "block_size": 512, 00:08:12.759 "num_blocks": 1048576, 00:08:12.759 "uuid": "1add9abb-02ce-4ef7-af88-410a02abe5ba", 00:08:12.759 "assigned_rate_limits": { 00:08:12.759 "rw_ios_per_sec": 0, 00:08:12.759 "rw_mbytes_per_sec": 0, 00:08:12.759 "r_mbytes_per_sec": 0, 00:08:12.759 "w_mbytes_per_sec": 0 00:08:12.759 }, 00:08:12.759 "claimed": true, 00:08:12.759 "claim_type": "exclusive_write", 00:08:12.759 "zoned": false, 00:08:12.759 "supported_io_types": { 00:08:12.759 "read": true, 00:08:12.759 "write": true, 00:08:12.759 "unmap": true, 00:08:12.759 "flush": true, 00:08:12.759 "reset": true, 00:08:12.759 "nvme_admin": false, 00:08:12.759 "nvme_io": false, 00:08:12.759 "nvme_io_md": false, 00:08:12.759 "write_zeroes": true, 00:08:12.759 "zcopy": true, 00:08:12.759 "get_zone_info": false, 00:08:12.759 "zone_management": false, 00:08:12.759 
"zone_append": false, 00:08:12.759 "compare": false, 00:08:12.759 "compare_and_write": false, 00:08:12.759 "abort": true, 00:08:12.759 "seek_hole": false, 00:08:12.759 "seek_data": false, 00:08:12.759 "copy": true, 00:08:12.759 "nvme_iov_md": false 00:08:12.759 }, 00:08:12.759 "memory_domains": [ 00:08:12.759 { 00:08:12.759 "dma_device_id": "system", 00:08:12.759 "dma_device_type": 1 00:08:12.759 }, 00:08:12.759 { 00:08:12.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.759 "dma_device_type": 2 00:08:12.759 } 00:08:12.759 ], 00:08:12.759 "driver_specific": {} 00:08:12.759 } 00:08:12.759 ]' 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:12.759 13:54:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:14.671 13:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:14.671 13:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:14.671 13:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:14.671 13:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:14.671 13:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:16.590 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:16.850 13:54:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:17.111 13:54:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.049 ************************************ 00:08:18.049 START TEST filesystem_in_capsule_ext4 00:08:18.049 ************************************ 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:18.049 13:54:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:18.049 13:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:18.049 mke2fs 1.46.5 (30-Dec-2021) 00:08:18.309 Discarding device blocks: 0/522240 done 00:08:18.309 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:18.309 Filesystem UUID: 9e9e6cf8-4adf-4ada-8223-92ce0ded52d7 00:08:18.309 Superblock backups stored on blocks: 00:08:18.309 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:18.309 00:08:18.309 Allocating group tables: 0/64 done 00:08:18.309 Writing inode tables: 0/64 done 00:08:18.568 Creating journal (8192 blocks): done 00:08:19.397 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:19.397 00:08:19.397 13:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:19.397 13:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1178153 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.338 00:08:20.338 real 0m2.234s 00:08:20.338 user 0m0.023s 00:08:20.338 sys 0m0.055s 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:20.338 ************************************ 00:08:20.338 END TEST filesystem_in_capsule_ext4 00:08:20.338 ************************************ 
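[Annotation] Each filesystem variant in this test runs the same smoke test once mkfs succeeds: mount the partition, create and delete a file with syncs in between, unmount, and confirm both that the target process is still alive and that the device nodes are still visible. A sketch of the sequence just traced for ext4, using the PID and mount point from this run (1178153 and /mnt/device; substitute your own values):

# Smoke-test a freshly created filesystem backed by an NVMe-oF namespace.
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
# kill -0 only probes the process; a failure means the SPDK target died mid-test.
kill -0 1178153
# Parent device and partition should both still be enumerated.
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1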
00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.338 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.599 ************************************ 00:08:20.599 START TEST filesystem_in_capsule_btrfs 00:08:20.599 ************************************ 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:20.599 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:20.860 btrfs-progs v6.6.2 00:08:20.860 See https://btrfs.readthedocs.io for more information. 00:08:20.860 00:08:20.860 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:20.860 NOTE: several default settings have changed in version 5.15, please make sure 00:08:20.860 this does not affect your deployments: 00:08:20.860 - DUP for metadata (-m dup) 00:08:20.860 - enabled no-holes (-O no-holes) 00:08:20.860 - enabled free-space-tree (-R free-space-tree) 00:08:20.860 00:08:20.860 Label: (null) 00:08:20.860 UUID: 2bb336dd-7a37-48ab-992c-976ede5e7f82 00:08:20.860 Node size: 16384 00:08:20.860 Sector size: 4096 00:08:20.860 Filesystem size: 510.00MiB 00:08:20.860 Block group profiles: 00:08:20.860 Data: single 8.00MiB 00:08:20.860 Metadata: DUP 32.00MiB 00:08:20.860 System: DUP 8.00MiB 00:08:20.860 SSD detected: yes 00:08:20.860 Zoned device: no 00:08:20.860 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:20.860 Runtime features: free-space-tree 00:08:20.860 Checksum: crc32c 00:08:20.860 Number of devices: 1 00:08:20.860 Devices: 00:08:20.860 ID SIZE PATH 00:08:20.860 1 510.00MiB /dev/nvme0n1p1 00:08:20.860 00:08:20.860 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:20.860 13:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.120 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.120 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:21.120 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.120 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:21.120 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1178153 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.381 00:08:21.381 real 0m0.803s 00:08:21.381 user 0m0.016s 00:08:21.381 sys 0m0.069s 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 ************************************ 00:08:21.381 END TEST filesystem_in_capsule_btrfs 00:08:21.381 ************************************ 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 ************************************ 00:08:21.381 START TEST filesystem_in_capsule_xfs 00:08:21.381 ************************************ 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:21.381 13:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:21.381 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:21.381 = sectsz=512 attr=2, projid32bit=1 00:08:21.381 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:21.381 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:21.381 data = bsize=4096 blocks=130560, imaxpct=25 00:08:21.381 = sunit=0 swidth=0 blks 00:08:21.381 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:21.381 log =internal log bsize=4096 blocks=16384, version=2 00:08:21.381 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:21.381 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:22.324 Discarding blocks...Done. 
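[Annotation] All three passes (ext4, btrfs, xfs) route through the same make_filesystem helper, and the only per-filesystem difference visible in the traces is the force flag: mkfs.ext4 takes an uppercase -F, while mkfs.btrfs and mkfs.xfs take -f. A condensed sketch of that dispatch, keeping only the flag selection (the real helper in common/autotest_common.sh also carries a retry counter, elided here):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    # ext4's mkfs forces with -F; btrfs and xfs use lowercase -f.
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}

# e.g. make_filesystem xfs /dev/nvme0n1p1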
00:08:22.324 13:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:22.324 13:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1178153 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.236 00:08:24.236 real 0m2.948s 00:08:24.236 user 0m0.020s 00:08:24.236 sys 0m0.058s 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:24.236 ************************************ 00:08:24.236 END TEST filesystem_in_capsule_xfs 00:08:24.236 ************************************ 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:24.236 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:24.808 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:24.809 13:54:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1178153 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1178153 ']' 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1178153 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1178153 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1178153' 00:08:24.809 killing process with pid 1178153 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1178153 00:08:24.809 13:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1178153 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:25.070 00:08:25.070 real 0m13.424s 00:08:25.070 user 0m52.867s 00:08:25.070 sys 0m1.075s 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.070 ************************************ 00:08:25.070 END TEST nvmf_filesystem_in_capsule 00:08:25.070 ************************************ 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.070 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.070 rmmod nvme_tcp 00:08:25.331 rmmod nvme_fabrics 00:08:25.331 rmmod nvme_keyring 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.331 13:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.242 13:54:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.242 00:08:27.242 real 0m36.508s 00:08:27.242 user 1m43.312s 00:08:27.242 sys 0m8.403s 00:08:27.242 13:54:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.242 13:54:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.242 ************************************ 00:08:27.242 END TEST nvmf_filesystem 00:08:27.242 ************************************ 00:08:27.503 13:54:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:27.504 13:54:25 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:27.504 13:54:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.504 13:54:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.504 13:54:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.504 ************************************ 00:08:27.504 START TEST nvmf_target_discovery 00:08:27.504 ************************************ 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:27.504 * Looking for test storage... 
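[Annotation] Before the discovery test proceeds, note the teardown sequence that closed out the filesystem test above: drop the test partition, disconnect the initiator, delete the subsystem over RPC, stop the target, then unload the initiator modules and flush the test addresses. A condensed sketch of that order (rpc.py stands in for the harness's rpc_cmd wrapper; the PID is this run's):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the GPT test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach the initiator first
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 1178153                                     # stop the nvmf_tgt process
modprobe -v -r nvme-tcp                          # then unload initiator modules
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1                         # release the test-side address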
00:08:27.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.504 13:54:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.709 13:54:33 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:35.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.709 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:35.710 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:35.710 Found net devices under 0000:31:00.0: cvl_0_0 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:35.710 Found net devices under 0000:31:00.1: cvl_0_1 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:08:35.710 00:08:35.710 --- 10.0.0.2 ping statistics --- 00:08:35.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.710 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:08:35.710 00:08:35.710 --- 10.0.0.1 ping statistics --- 00:08:35.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.710 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.710 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1185940 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1185940 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1185940 ']' 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:35.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.711 13:54:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.711 [2024-07-15 13:54:33.601000] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:35.711 [2024-07-15 13:54:33.601049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.711 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.711 [2024-07-15 13:54:33.679343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.711 [2024-07-15 13:54:33.745428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.711 [2024-07-15 13:54:33.745467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.711 [2024-07-15 13:54:33.745475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.711 [2024-07-15 13:54:33.745481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.711 [2024-07-15 13:54:33.745487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.711 [2024-07-15 13:54:33.745659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.711 [2024-07-15 13:54:33.745775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.711 [2024-07-15 13:54:33.745879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.711 [2024-07-15 13:54:33.745880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.282 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.282 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:36.282 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.282 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.282 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.542 [2024-07-15 13:54:34.411356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
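[Annotation] The discovery test provisions its targets with a small RPC loop, the start of which is traced above: create the TCP transport, then for each of four subsystems create a null bdev, a subsystem, a namespace mapping, and a listener. In the trace each call goes through rpc_cmd, which forwards to SPDK's scripts/rpc.py; a sketch of the equivalent loop (10.0.0.2:4420 is the listener address configured in the netns earlier):

# Provision four null bdevs, each exposed behind its own subsystem.
rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 4); do
    rpc.py bdev_null_create "Null$i" 102400 512          # 102400 x 512 B blocks (~50 MiB)
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done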
00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.542 Null1 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.542 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 [2024-07-15 13:54:34.471675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 Null2 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:36.543 13:54:34 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 Null3 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 Null4 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.543 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:08:36.804 00:08:36.804 Discovery Log Number of Records 6, Generation counter 6 00:08:36.804 =====Discovery Log Entry 0====== 00:08:36.804 trtype: tcp 00:08:36.804 adrfam: ipv4 00:08:36.804 subtype: current discovery subsystem 00:08:36.804 treq: not required 00:08:36.804 portid: 0 00:08:36.804 trsvcid: 4420 00:08:36.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:36.804 traddr: 10.0.0.2 00:08:36.804 eflags: explicit discovery connections, duplicate discovery information 00:08:36.804 sectype: none 00:08:36.804 =====Discovery Log Entry 1====== 00:08:36.804 trtype: tcp 00:08:36.804 adrfam: ipv4 00:08:36.804 subtype: nvme subsystem 00:08:36.804 treq: not required 00:08:36.804 portid: 0 00:08:36.804 trsvcid: 4420 00:08:36.804 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:36.804 traddr: 10.0.0.2 00:08:36.804 eflags: none 00:08:36.804 sectype: none 00:08:36.804 =====Discovery Log Entry 2====== 00:08:36.804 trtype: tcp 00:08:36.804 adrfam: ipv4 00:08:36.804 subtype: nvme subsystem 00:08:36.804 treq: not required 00:08:36.804 portid: 0 00:08:36.804 trsvcid: 4420 00:08:36.804 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:36.804 traddr: 10.0.0.2 00:08:36.804 eflags: none 00:08:36.804 sectype: none 00:08:36.804 =====Discovery Log Entry 3====== 00:08:36.804 trtype: tcp 00:08:36.804 adrfam: ipv4 00:08:36.804 subtype: nvme subsystem 00:08:36.804 treq: not required 00:08:36.804 portid: 0 00:08:36.804 trsvcid: 4420 00:08:36.804 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:36.804 traddr: 10.0.0.2 00:08:36.804 eflags: none 00:08:36.804 sectype: none 00:08:36.804 =====Discovery Log Entry 4====== 00:08:36.804 trtype: tcp 00:08:36.804 adrfam: ipv4 00:08:36.804 subtype: nvme subsystem 00:08:36.804 treq: not required 
00:08:36.804 portid: 0 00:08:36.804 trsvcid: 4420 00:08:36.804 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:36.804 traddr: 10.0.0.2 00:08:36.804 eflags: none 00:08:36.804 sectype: none 00:08:36.804 =====Discovery Log Entry 5====== 00:08:36.804 trtype: tcp 00:08:36.804 adrfam: ipv4 00:08:36.804 subtype: discovery subsystem referral 00:08:36.804 treq: not required 00:08:36.804 portid: 0 00:08:36.804 trsvcid: 4430 00:08:36.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:36.804 traddr: 10.0.0.2 00:08:36.804 eflags: none 00:08:36.804 sectype: none 00:08:36.804 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:36.804 Perform nvmf subsystem discovery via RPC 00:08:36.804 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:36.804 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.804 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.804 [ 00:08:36.804 { 00:08:36.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:36.804 "subtype": "Discovery", 00:08:36.804 "listen_addresses": [ 00:08:36.804 { 00:08:36.804 "trtype": "TCP", 00:08:36.804 "adrfam": "IPv4", 00:08:36.804 "traddr": "10.0.0.2", 00:08:36.804 "trsvcid": "4420" 00:08:36.804 } 00:08:36.804 ], 00:08:36.804 "allow_any_host": true, 00:08:36.804 "hosts": [] 00:08:36.804 }, 00:08:36.804 { 00:08:36.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.804 "subtype": "NVMe", 00:08:36.805 "listen_addresses": [ 00:08:36.805 { 00:08:36.805 "trtype": "TCP", 00:08:36.805 "adrfam": "IPv4", 00:08:36.805 "traddr": "10.0.0.2", 00:08:36.805 "trsvcid": "4420" 00:08:36.805 } 00:08:36.805 ], 00:08:36.805 "allow_any_host": true, 00:08:36.805 "hosts": [], 00:08:36.805 "serial_number": "SPDK00000000000001", 00:08:36.805 "model_number": "SPDK bdev Controller", 00:08:36.805 "max_namespaces": 32, 00:08:36.805 "min_cntlid": 1, 00:08:36.805 "max_cntlid": 65519, 00:08:36.805 "namespaces": [ 00:08:36.805 { 00:08:36.805 "nsid": 1, 00:08:36.805 "bdev_name": "Null1", 00:08:36.805 "name": "Null1", 00:08:36.805 "nguid": "A504D75A54964D229D639F1AB9095756", 00:08:36.805 "uuid": "a504d75a-5496-4d22-9d63-9f1ab9095756" 00:08:36.805 } 00:08:36.805 ] 00:08:36.805 }, 00:08:36.805 { 00:08:36.805 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:36.805 "subtype": "NVMe", 00:08:36.805 "listen_addresses": [ 00:08:36.805 { 00:08:36.805 "trtype": "TCP", 00:08:36.805 "adrfam": "IPv4", 00:08:36.805 "traddr": "10.0.0.2", 00:08:36.805 "trsvcid": "4420" 00:08:36.805 } 00:08:36.805 ], 00:08:36.805 "allow_any_host": true, 00:08:36.805 "hosts": [], 00:08:36.805 "serial_number": "SPDK00000000000002", 00:08:36.805 "model_number": "SPDK bdev Controller", 00:08:36.805 "max_namespaces": 32, 00:08:36.805 "min_cntlid": 1, 00:08:36.805 "max_cntlid": 65519, 00:08:36.805 "namespaces": [ 00:08:36.805 { 00:08:36.805 "nsid": 1, 00:08:36.805 "bdev_name": "Null2", 00:08:36.805 "name": "Null2", 00:08:36.805 "nguid": "A7966C9ED7034F8483AA49725886319C", 00:08:36.805 "uuid": "a7966c9e-d703-4f84-83aa-49725886319c" 00:08:36.805 } 00:08:36.805 ] 00:08:36.805 }, 00:08:36.805 { 00:08:36.805 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:36.805 "subtype": "NVMe", 00:08:36.805 "listen_addresses": [ 00:08:36.805 { 00:08:36.805 "trtype": "TCP", 00:08:36.805 "adrfam": "IPv4", 00:08:36.805 "traddr": "10.0.0.2", 00:08:36.805 "trsvcid": "4420" 00:08:36.805 } 00:08:36.805 ], 00:08:36.805 "allow_any_host": true, 
00:08:36.805 "hosts": [], 00:08:36.805 "serial_number": "SPDK00000000000003", 00:08:36.805 "model_number": "SPDK bdev Controller", 00:08:36.805 "max_namespaces": 32, 00:08:36.805 "min_cntlid": 1, 00:08:36.805 "max_cntlid": 65519, 00:08:36.805 "namespaces": [ 00:08:36.805 { 00:08:36.805 "nsid": 1, 00:08:36.805 "bdev_name": "Null3", 00:08:36.805 "name": "Null3", 00:08:36.805 "nguid": "DAC41A9156934E6385CCB41CCE1028B4", 00:08:36.805 "uuid": "dac41a91-5693-4e63-85cc-b41cce1028b4" 00:08:36.805 } 00:08:36.805 ] 00:08:36.805 }, 00:08:36.805 { 00:08:36.805 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:36.805 "subtype": "NVMe", 00:08:36.805 "listen_addresses": [ 00:08:36.805 { 00:08:36.805 "trtype": "TCP", 00:08:36.805 "adrfam": "IPv4", 00:08:36.805 "traddr": "10.0.0.2", 00:08:36.805 "trsvcid": "4420" 00:08:36.805 } 00:08:36.805 ], 00:08:36.805 "allow_any_host": true, 00:08:36.805 "hosts": [], 00:08:36.805 "serial_number": "SPDK00000000000004", 00:08:36.805 "model_number": "SPDK bdev Controller", 00:08:36.805 "max_namespaces": 32, 00:08:36.805 "min_cntlid": 1, 00:08:36.805 "max_cntlid": 65519, 00:08:36.805 "namespaces": [ 00:08:36.805 { 00:08:36.805 "nsid": 1, 00:08:36.805 "bdev_name": "Null4", 00:08:36.805 "name": "Null4", 00:08:36.805 "nguid": "FF91C426439941238AEF5435FB731100", 00:08:36.805 "uuid": "ff91c426-4399-4123-8aef-5435fb731100" 00:08:36.805 } 00:08:36.805 ] 00:08:36.805 } 00:08:36.805 ] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
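For reference, the setup that discovery.sh drove earlier in this trace reduces to the sketch below. It is a minimal illustration, assuming a running nvmf_tgt, SPDK's scripts/rpc.py on PATH, and the default /var/tmp/spdk.sock RPC socket (the socket path is an assumption here; rpc_cmd in the trace is the harness wrapper around the same RPCs, and the arguments mirror the log):

# Sketch of the discovery.sh setup traced above. Assumptions: nvmf_tgt already
# running, rpc.py from an SPDK checkout, default RPC socket /var/tmp/spdk.sock.
RPC="rpc.py -s /var/tmp/spdk.sock"
for i in $(seq 1 4); do
  $RPC bdev_null_create Null$i 102400 512                          # null bdev, args as in the log
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
       -a -s "SPDK0000000000000$i"                                 # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i    # bdev becomes nsid 1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery service itself
$RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # shows up as entry 5
nvme discover -t tcp -a 10.0.0.2 -s 4420                                # returns the six records

The six-record discovery log printed above (four NVMe subsystems, the current discovery subsystem, and the 4430 referral) is exactly what this sequence produces; the teardown that follows simply walks the same objects in reverse.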
00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.805 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.066 rmmod nvme_tcp 00:08:37.066 rmmod nvme_fabrics 00:08:37.066 rmmod nvme_keyring 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1185940 ']' 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1185940 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1185940 ']' 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1185940 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.066 13:54:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1185940 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1185940' 00:08:37.066 killing process with pid 1185940 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1185940 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1185940 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.066 13:54:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.637 13:54:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.637 00:08:39.637 real 0m11.816s 00:08:39.637 user 0m8.063s 00:08:39.637 sys 0m6.243s 00:08:39.637 13:54:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.637 13:54:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:39.637 ************************************ 00:08:39.637 END TEST nvmf_target_discovery 00:08:39.637 ************************************ 00:08:39.637 13:54:37 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:39.637 13:54:37 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:39.637 13:54:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.637 13:54:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.637 13:54:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.637 ************************************ 00:08:39.637 START TEST nvmf_referrals 00:08:39.637 ************************************ 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:39.637 * Looking for test storage... 00:08:39.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
[paths/export.sh@2-@6: repetitive PATH trace trimmed; the sourced export script prepends the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin toolchain directories (several times over) ahead of the standard system PATH, exports it, and echoes the result] 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
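The three referral addresses just defined, 127.0.0.2 through 127.0.0.4, are what the rest of this test exercises. Condensed, the flow approximates the sketch below; it again assumes a live target, rpc.py on the default socket, and nvme-cli, rather than the harness's rpc_cmd and get_referral_ips helpers, while the jq filters are copied from the trace:

# Sketch of the referrals.sh checks (assumptions: running nvmf_tgt with a tcp
# transport and a discovery listener on 10.0.0.2:8009, as set up later in the log).
RPC="rpc.py -s /var/tmp/spdk.sock"
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430        # advertise a referral
done

# RPC view: should print the three addresses above
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Host view: the same referrals in the discovery log; the jq filter drops the
# local "current discovery subsystem" record, exactly as the trace does
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430     # and tear them down
done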
00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.637 13:54:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.778 13:54:45 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:47.778 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:47.778 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.778 13:54:45 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:47.778 Found net devices under 0000:31:00.0: cvl_0_0 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:47.778 Found net devices under 0000:31:00.1: cvl_0_1 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.778 13:54:45 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:47.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:08:47.778 00:08:47.778 --- 10.0.0.2 ping statistics --- 00:08:47.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.778 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:47.778 00:08:47.778 --- 10.0.0.1 ping statistics --- 00:08:47.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.778 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1190989 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1190989 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1190989 ']' 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
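One note on the network plumbing traced just above: the test pins the target-side NIC port inside a network namespace so that initiator and target can talk over real e810 hardware on a single host. A condensed replay, assuming this rig's cvl_0_0/cvl_0_1 interface names and root privileges, looks like this (the nvmfappstart output continues below):

# Condensed replay of the nvmf_tcp_init steps traced above; interface names are
# specific to this rig's ice ports.
ip netns add cvl_0_0_ns_spdk                                        # target lives in a netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The two successful pings recorded above confirm that both directions of this topology are reachable before the target application is launched inside the namespace.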
00:08:47.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.778 13:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.778 [2024-07-15 13:54:45.732166] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:47.778 [2024-07-15 13:54:45.732228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.778 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.778 [2024-07-15 13:54:45.813600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.778 [2024-07-15 13:54:45.889093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.778 [2024-07-15 13:54:45.889132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.778 [2024-07-15 13:54:45.889140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.778 [2024-07-15 13:54:45.889146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.778 [2024-07-15 13:54:45.889152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.778 [2024-07-15 13:54:45.889290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.778 [2024-07-15 13:54:45.889409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.778 [2024-07-15 13:54:45.889567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.778 [2024-07-15 13:54:45.889568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 [2024-07-15 13:54:46.563406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 [2024-07-15 13:54:46.579570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:48.719 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:48.981 13:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.981 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:49.241 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:49.242 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:49.502 13:54:47 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:49.502 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:49.763 13:54:47 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:49.763 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:50.024 13:54:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:50.025 
13:54:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.025 rmmod nvme_tcp 00:08:50.025 rmmod nvme_fabrics 00:08:50.025 rmmod nvme_keyring 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1190989 ']' 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1190989 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1190989 ']' 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1190989 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.025 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1190989 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1190989' 00:08:50.287 killing process with pid 1190989 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1190989 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1190989 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.287 13:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.834 13:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.834 00:08:52.834 real 0m13.058s 00:08:52.834 user 0m12.915s 00:08:52.834 sys 0m6.557s 00:08:52.834 13:54:50 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.834 13:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:52.834 ************************************ 00:08:52.834 END TEST nvmf_referrals 00:08:52.834 ************************************ 00:08:52.834 13:54:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:52.834 13:54:50 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:52.834 13:54:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.834 13:54:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.834 13:54:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.834 ************************************ 00:08:52.834 START TEST nvmf_connect_disconnect 00:08:52.834 ************************************ 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:52.834 * Looking for test storage... 00:08:52.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.834 13:54:50 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.834 13:54:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:00.977 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:00.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.977 13:54:58 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:00.977 Found net devices under 0000:31:00.0: cvl_0_0 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:00.977 Found net devices under 0000:31:00.1: cvl_0_1 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.977 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:09:00.978 00:09:00.978 --- 10.0.0.2 ping statistics --- 00:09:00.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.978 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:09:00.978 00:09:00.978 --- 10.0.0.1 ping statistics --- 00:09:00.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.978 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1196127 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1196127 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1196127 ']' 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.978 13:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.978 [2024-07-15 13:54:58.602796] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:09:00.978 [2024-07-15 13:54:58.602837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.978 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.978 [2024-07-15 13:54:58.665792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.978 [2024-07-15 13:54:58.730878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.978 [2024-07-15 13:54:58.730911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.978 [2024-07-15 13:54:58.730918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.978 [2024-07-15 13:54:58.730925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.978 [2024-07-15 13:54:58.730930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.978 [2024-07-15 13:54:58.731068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.978 [2024-07-15 13:54:58.731203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.978 [2024-07-15 13:54:58.731360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.978 [2024-07-15 13:54:58.731362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 [2024-07-15 13:54:59.443472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:01.561 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.562 13:54:59 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.562 [2024-07-15 13:54:59.502691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:01.562 13:54:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:05.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.003 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:20.003 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:20.003 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.003 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:20.003 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.004 rmmod nvme_tcp 00:09:20.004 rmmod nvme_fabrics 00:09:20.004 rmmod nvme_keyring 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1196127 ']' 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1196127 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1196127 ']' 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1196127 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1196127 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1196127' 00:09:20.004 killing process with pid 1196127 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1196127 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1196127 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.004 13:55:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.916 13:55:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.916 00:09:21.916 real 0m29.346s 00:09:21.916 user 1m17.903s 00:09:21.916 sys 0m6.919s 00:09:21.916 13:55:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.916 13:55:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.916 ************************************ 00:09:21.916 END TEST nvmf_connect_disconnect 00:09:21.916 ************************************ 00:09:21.916 13:55:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:21.916 13:55:19 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:21.916 13:55:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.916 13:55:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.916 13:55:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.916 ************************************ 00:09:21.916 START TEST nvmf_multitarget 00:09:21.916 ************************************ 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:21.916 * Looking for test storage... 
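(A condensed reproduction of the nvmf_connect_disconnect run that finished above, for readers following the trace. This is a sketch, not the harness itself: rpc_cmd in the trace is assumed to resolve to SPDK's scripts/rpc.py against the nvmf_tgt started earlier inside cvl_0_0_ns_spdk, and the ip netns exec prefix is omitted for brevity.)

  # Target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks, then a
  # subsystem with one namespace and a TCP listener on the namespaced address
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512        # the trace assigns the result to bdev=Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: the loop runs num_iterations=5 times; each disconnect prints the
  # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines seen above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1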
00:09:21.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.916 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
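(Context for the discovery checks that recur across these suites: the get_discovery_entries helper traced in nvmf_referrals above amounts, condensed, to the pipeline below. The hostnqn/hostid pair is the one nvme gen-hostnqn produced in this run; the two jq stages from the trace are merged into one here, and the subtype string is swapped for "discovery subsystem referral" when referrals rather than subsystems are being counted.)

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'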
00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.917 13:55:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:30.054 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:30.054 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:30.054 Found net devices under 0000:31:00.0: cvl_0_0 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
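(The records just below repeat, for the multitarget suite, the same namespace plumbing nvmf_connect_disconnect used: once both cvl ports are matched, the first becomes the target interface inside cvl_0_0_ns_spdk at 10.0.0.2 while the second stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace:)

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port toward the initiator NIC
  ping -c 1 10.0.0.2                                             # reachability check toward the namespaced target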
00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:30.054 Found net devices under 0000:31:00.1: cvl_0_1 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:30.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:09:30.054 00:09:30.054 --- 10.0.0.2 ping statistics --- 00:09:30.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.054 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:09:30.054 00:09:30.054 --- 10.0.0.1 ping statistics --- 00:09:30.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.054 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.054 13:55:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1204721 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1204721 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1204721 ']' 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.054 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:30.054 [2024-07-15 13:55:28.063064] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:09:30.054 [2024-07-15 13:55:28.063121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.054 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.054 [2024-07-15 13:55:28.141478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.313 [2024-07-15 13:55:28.207656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.313 [2024-07-15 13:55:28.207691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.313 [2024-07-15 13:55:28.207699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.314 [2024-07-15 13:55:28.207705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.314 [2024-07-15 13:55:28.207711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.314 [2024-07-15 13:55:28.207794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.314 [2024-07-15 13:55:28.207994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.314 [2024-07-15 13:55:28.207995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.314 [2024-07-15 13:55:28.207869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:30.899 13:55:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:31.159 "nvmf_tgt_1" 00:09:31.159 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:31.159 "nvmf_tgt_2" 00:09:31.159 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:31.159 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:31.159 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:31.159 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:31.419 true 00:09:31.419 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:31.419 true 00:09:31.419 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:31.419 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.680 rmmod nvme_tcp 00:09:31.680 rmmod nvme_fabrics 00:09:31.680 rmmod nvme_keyring 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1204721 ']' 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1204721 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1204721 ']' 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1204721 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1204721 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1204721' 00:09:31.680 killing process with pid 1204721 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1204721 00:09:31.680 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1204721 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.941 13:55:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.860 13:55:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.860 00:09:33.860 real 0m12.025s 00:09:33.860 user 0m9.442s 00:09:33.860 sys 0m6.309s 00:09:33.860 13:55:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.860 13:55:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:33.860 ************************************ 00:09:33.860 END TEST nvmf_multitarget 00:09:33.860 ************************************ 00:09:33.860 13:55:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.860 13:55:31 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:33.860 13:55:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.860 13:55:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.860 13:55:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 ************************************ 00:09:34.125 START TEST nvmf_rpc 00:09:34.125 ************************************ 00:09:34.125 13:55:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:34.125 * Looking for test storage... 
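
For reference, the nvmf_multitarget test that just reported PASS boils down to a handful of RPC calls: create two extra target instances, check that nvmf_get_targets now reports three, delete both, and check that the count drops back to one. A condensed sketch using the same multitarget_rpc.py helper and arguments shown in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists

    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets

    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only
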
00:09:34.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.125 13:55:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.125 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:34.125 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.125 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.125 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.125 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.126 13:55:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
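
The gather_supported_nvmf_pci_devs step that follows classifies NICs by PCI vendor:device ID, Intel E810 parts (0x1592, 0x159b), X722 (0x37d2), and a list of Mellanox IDs, before resolving each device to its net interface through sysfs. A rough standalone equivalent for the e810 case (the lspci parsing is illustrative; the harness walks its own pci_bus_cache instead):

    # PCI addresses of Intel E810 NICs (vendor 0x8086, devices 0x1592/0x159b per the table below).
    lspci -Dnn | grep -Ei '8086:(1592|159b)' | awk '{print $1}'

    # Resolve each match to its kernel net interface, mirroring the
    # /sys/bus/pci/devices/$pci/net/ lookup in the trace.
    for pci in $(lspci -Dnn | grep -Ei '8086:(1592|159b)' | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/"
    done
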
00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:42.270 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:42.270 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:42.271 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:42.271 Found net devices under 0000:31:00.0: cvl_0_0 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:42.271 Found net devices under 0000:31:00.1: cvl_0_1 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.271 13:55:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:09:42.271 00:09:42.271 --- 10.0.0.2 ping statistics --- 00:09:42.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.271 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:09:42.271 00:09:42.271 --- 10.0.0.1 ping statistics --- 00:09:42.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.271 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1209744 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1209744 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1209744 ']' 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.271 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.271 [2024-07-15 13:55:40.205091] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:42.271 [2024-07-15 13:55:40.205153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.272 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.272 [2024-07-15 13:55:40.287048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.272 [2024-07-15 13:55:40.363398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.272 [2024-07-15 13:55:40.363437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:42.272 [2024-07-15 13:55:40.363445] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.272 [2024-07-15 13:55:40.363452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.272 [2024-07-15 13:55:40.363457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.272 [2024-07-15 13:55:40.363595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.272 [2024-07-15 13:55:40.363727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.272 [2024-07-15 13:55:40.363889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.272 [2024-07-15 13:55:40.363889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.213 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.213 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:43.213 13:55:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.213 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:43.213 13:55:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:43.213 "tick_rate": 2400000000, 00:09:43.213 "poll_groups": [ 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_000", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.213 "transports": [] 00:09:43.213 }, 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_001", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.213 "transports": [] 00:09:43.213 }, 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_002", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.213 "transports": [] 00:09:43.213 }, 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_003", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.213 "transports": [] 00:09:43.213 } 00:09:43.213 ] 00:09:43.213 }' 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.213 [2024-07-15 13:55:41.150694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.213 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:43.213 "tick_rate": 2400000000, 00:09:43.213 "poll_groups": [ 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_000", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.213 "transports": [ 00:09:43.213 { 00:09:43.213 "trtype": "TCP" 00:09:43.213 } 00:09:43.213 ] 00:09:43.213 }, 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_001", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.213 "transports": [ 00:09:43.213 { 00:09:43.213 "trtype": "TCP" 00:09:43.213 } 00:09:43.213 ] 00:09:43.213 }, 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_002", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.213 "transports": [ 00:09:43.213 { 00:09:43.213 "trtype": "TCP" 00:09:43.213 } 00:09:43.213 ] 00:09:43.213 }, 00:09:43.213 { 00:09:43.213 "name": "nvmf_tgt_poll_group_003", 00:09:43.213 "admin_qpairs": 0, 00:09:43.213 "io_qpairs": 0, 00:09:43.213 "current_admin_qpairs": 0, 00:09:43.213 "current_io_qpairs": 0, 00:09:43.213 "pending_bdev_io": 0, 00:09:43.213 "completed_nvme_io": 0, 00:09:43.214 "transports": [ 00:09:43.214 { 00:09:43.214 "trtype": "TCP" 00:09:43.214 } 00:09:43.214 ] 00:09:43.214 } 00:09:43.214 ] 00:09:43.214 }' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
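
The jcount and jsum helpers being traced through here are thin jq wrappers: jcount counts the values a filter yields, and jsum totals them with awk, exactly the '{s+=$1}END{print s}' program visible in the trace. A minimal restatement, assuming the nvmf_get_stats output has been captured in $stats as the test does:

    jcount() {  # number of values the jq filter produces
        jq "$1" <<<"$stats" | wc -l
    }

    jsum() {    # arithmetic sum of the values the jq filter produces
        jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'
    }

    [ "$(jcount '.poll_groups[].name')" -eq 4 ]        # one poll group per core under -m 0xF
    [ "$(jsum '.poll_groups[].admin_qpairs')" -eq 0 ]  # nothing connected yet
    [ "$(jsum '.poll_groups[].io_qpairs')" -eq 0 ]
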
00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 Malloc1 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.214 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 [2024-07-15 13:55:41.338430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:09:43.474 [2024-07-15 13:55:41.365227] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:09:43.474 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:43.474 could not add new controller: failed to write to nvme-fabrics device 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.474 13:55:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.856 13:55:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.856 13:55:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:44.856 13:55:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.856 13:55:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:44.856 13:55:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:47.397 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:47.397 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:47.397 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:47.397 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:47.398 13:55:44 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:47.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:47.398 13:55:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.398 [2024-07-15 13:55:45.050702] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:09:47.398 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:47.398 could not add new controller: failed to write to nvme-fabrics device 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.398 13:55:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.784 13:55:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.784 13:55:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:48.784 13:55:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.784 13:55:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:48.784 13:55:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:50.696 13:55:48 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.696 [2024-07-15 13:55:48.682867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.696 13:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.609 13:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.609 13:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:52.609 13:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.609 13:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:52.609 13:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.519 [2024-07-15 13:55:52.386166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.519 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.520 13:55:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:55.905 13:55:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:55.905 13:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:55.905 13:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.905 13:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:55.905 13:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.816 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.816 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.816 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.816 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:57.816 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.816 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:57.816 13:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.077 13:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.077 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:58.077 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:58.077 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.077 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:58.077 13:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 [2024-07-15 13:55:56.045010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.077 13:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.460 13:55:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.460 13:55:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.460 13:55:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.460 13:55:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:59.460 13:55:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.002 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.003 [2024-07-15 13:55:59.706769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.003 13:55:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.387 13:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:03.387 13:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:03.387 13:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.387 13:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:03.387 13:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.299 
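The connect/disconnect cycles in this stretch lean on two polling helpers, waitforserial and waitforserial_disconnect, whose xtrace is interleaved above. A minimal re-creation from these records (the lsblk/grep pipelines, retry bound, and 2-second sleep are taken verbatim from the log; the function bodies themselves are an assumption about common/autotest_common.sh):

    # Poll until lsblk shows the expected number of devices with this serial.
    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        sleep 2                                  # let the kernel enumerate
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

    # Poll until no whole-word match for the serial remains.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }

Typical call sites, mirroring target/rpc.sh@86-91 above: waitforserial SPDKISFASTANDAWESOME right after nvme connect, and waitforserial_disconnect SPDKISFASTANDAWESOME right after nvme disconnect.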
13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.299 [2024-07-15 13:56:03.404654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.299 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.560 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.560 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.560 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.560 13:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.560 13:56:03 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.560 13:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.946 13:56:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.946 13:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:06.946 13:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.946 13:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:06.946 13:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:08.861 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:08.861 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:08.861 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.861 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:08.861 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.861 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:08.861 13:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.123 13:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.123 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:09.123 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:09.123 13:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 [2024-07-15 13:56:07.074821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 [2024-07-15 13:56:07.134949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:09.123 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.124 [2024-07-15 13:56:07.199130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
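Each pass of the seq 1 5 loop running through these records drives the same six JSON-RPCs against the target, with no kernel initiator involved. A standalone sketch of one pass (assumptions: an SPDK checkout with scripts/rpc.py available, the target already running, and a Malloc1 bdev already created):

    rpc=scripts/rpc.py                 # assumption: run from the SPDK tree
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1   # registered as namespace 1
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
    "$rpc" nvmf_delete_subsystem "$nqn"

The command names and flags are exactly those recorded by rpc_cmd at target/rpc.sh@100-107.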
00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.124 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 [2024-07-15 13:56:07.259324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
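After the final pass, the transcript pulls nvmf_get_stats and totals the per-poll-group counters with a jsum helper (target/rpc.sh@112-113 below); it is just jq piped into awk, with the filter and awk program verbatim from the xtrace. A minimal re-creation (assumptions: jq installed, and piping straight from rpc.py rather than through the captured $stats variable the script uses):

    # Sum one numeric field across all poll groups in nvmf_get_stats output.
    jsum() {
        local filter=$1                # e.g. '.poll_groups[].admin_qpairs'
        jq "$filter" | awk '{s+=$1} END {print s}'
    }

    scripts/rpc.py nvmf_get_stats | jsum '.poll_groups[].io_qpairs'

With the stats shown below this yields 889 io_qpairs (224+223+218+224), matching the (( 889 > 0 )) check.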
00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 [2024-07-15 13:56:07.319505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:09.386 "tick_rate": 2400000000, 00:10:09.386 "poll_groups": [ 00:10:09.386 { 00:10:09.386 "name": "nvmf_tgt_poll_group_000", 00:10:09.386 "admin_qpairs": 0, 00:10:09.386 "io_qpairs": 224, 00:10:09.386 "current_admin_qpairs": 0, 00:10:09.386 "current_io_qpairs": 0, 00:10:09.386 "pending_bdev_io": 0, 00:10:09.386 "completed_nvme_io": 230, 00:10:09.386 "transports": [ 00:10:09.386 { 00:10:09.386 "trtype": "TCP" 00:10:09.386 } 00:10:09.386 ] 00:10:09.386 }, 00:10:09.386 { 00:10:09.386 "name": "nvmf_tgt_poll_group_001", 00:10:09.386 "admin_qpairs": 1, 00:10:09.386 "io_qpairs": 223, 00:10:09.386 "current_admin_qpairs": 0, 00:10:09.386 "current_io_qpairs": 0, 00:10:09.386 "pending_bdev_io": 0, 00:10:09.386 "completed_nvme_io": 223, 00:10:09.386 "transports": [ 00:10:09.386 { 00:10:09.386 "trtype": "TCP" 00:10:09.386 } 00:10:09.386 ] 00:10:09.386 }, 00:10:09.386 { 
00:10:09.386 "name": "nvmf_tgt_poll_group_002", 00:10:09.386 "admin_qpairs": 6, 00:10:09.386 "io_qpairs": 218, 00:10:09.386 "current_admin_qpairs": 0, 00:10:09.386 "current_io_qpairs": 0, 00:10:09.386 "pending_bdev_io": 0, 00:10:09.386 "completed_nvme_io": 267, 00:10:09.386 "transports": [ 00:10:09.386 { 00:10:09.386 "trtype": "TCP" 00:10:09.386 } 00:10:09.386 ] 00:10:09.386 }, 00:10:09.386 { 00:10:09.386 "name": "nvmf_tgt_poll_group_003", 00:10:09.386 "admin_qpairs": 0, 00:10:09.386 "io_qpairs": 224, 00:10:09.386 "current_admin_qpairs": 0, 00:10:09.386 "current_io_qpairs": 0, 00:10:09.386 "pending_bdev_io": 0, 00:10:09.386 "completed_nvme_io": 519, 00:10:09.386 "transports": [ 00:10:09.386 { 00:10:09.386 "trtype": "TCP" 00:10:09.386 } 00:10:09.386 ] 00:10:09.386 } 00:10:09.386 ] 00:10:09.386 }' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.386 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.386 rmmod nvme_tcp 00:10:09.673 rmmod nvme_fabrics 00:10:09.673 rmmod nvme_keyring 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1209744 ']' 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1209744 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1209744 ']' 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1209744 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1209744 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1209744' 00:10:09.673 killing process with pid 1209744 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1209744 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1209744 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.673 13:56:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.293 13:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:12.293 00:10:12.293 real 0m37.853s 00:10:12.293 user 1m51.757s 00:10:12.293 sys 0m7.518s 00:10:12.293 13:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.293 13:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 ************************************ 00:10:12.293 END TEST nvmf_rpc 00:10:12.293 ************************************ 00:10:12.293 13:56:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:12.293 13:56:09 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:12.293 13:56:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:12.293 13:56:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.293 13:56:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 ************************************ 00:10:12.293 START TEST nvmf_invalid 00:10:12.293 ************************************ 00:10:12.293 13:56:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:12.293 * Looking for test storage... 
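The nvmf_invalid run starting here rebuilds the same physical-NIC test bed, and the network-namespace plumbing it logs further down (nvmf/common.sh@242-268) reduces to a handful of ip and iptables commands. A sketch using the names from this log, with cvl_0_0 moved into the target namespace and cvl_0_1 left on the host as the initiator side:

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # host -> target namespace
    ip netns exec "$ns" ping -c 1 10.0.0.1       # target namespace -> host

The target itself is then launched inside the namespace via ip netns exec, which is why the nvmfpid recorded below belongs to an nvmf_tgt running under cvl_0_0_ns_spdk.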
00:10:12.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.293 13:56:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.293 13:56:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.293 13:56:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:12.294 13:56:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.427 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:20.428 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:20.428 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:20.428 Found net devices under 0000:31:00.0: cvl_0_0 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:20.428 Found net devices under 0000:31:00.1: cvl_0_1 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:20.428 13:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:20.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:10:20.428 00:10:20.428 --- 10.0.0.2 ping statistics --- 00:10:20.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.428 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:10:20.428 00:10:20.428 --- 10.0.0.1 ping statistics --- 00:10:20.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.428 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1220020 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1220020 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1220020 ']' 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.428 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:20.428 [2024-07-15 13:56:18.219290] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:10:20.428 [2024-07-15 13:56:18.219353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.428 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.428 [2024-07-15 13:56:18.300885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.428 [2024-07-15 13:56:18.377246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.428 [2024-07-15 13:56:18.377287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.428 [2024-07-15 13:56:18.377296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.428 [2024-07-15 13:56:18.377302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.428 [2024-07-15 13:56:18.377308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.428 [2024-07-15 13:56:18.377446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.428 [2024-07-15 13:56:18.377565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.428 [2024-07-15 13:56:18.377721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.428 [2024-07-15 13:56:18.377722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.999 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.999 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:20.999 13:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.999 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.999 13:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:20.999 13:56:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.999 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:20.999 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2911 00:10:21.259 [2024-07-15 13:56:19.168580] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:21.259 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:21.259 { 00:10:21.259 "nqn": "nqn.2016-06.io.spdk:cnode2911", 00:10:21.259 "tgt_name": "foobar", 00:10:21.259 "method": "nvmf_create_subsystem", 00:10:21.259 "req_id": 1 00:10:21.259 } 00:10:21.259 Got JSON-RPC error response 00:10:21.259 response: 00:10:21.259 { 00:10:21.259 "code": -32603, 00:10:21.259 "message": "Unable to find target foobar" 00:10:21.259 }' 00:10:21.259 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:21.259 { 00:10:21.259 "nqn": "nqn.2016-06.io.spdk:cnode2911", 00:10:21.259 "tgt_name": "foobar", 00:10:21.259 "method": "nvmf_create_subsystem", 00:10:21.259 "req_id": 1 00:10:21.259 } 00:10:21.259 Got JSON-RPC error response 00:10:21.259 response: 00:10:21.259 { 00:10:21.259 "code": -32603, 00:10:21.259 "message": "Unable to find target foobar" 00:10:21.259 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:21.259 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:21.259 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9509 00:10:21.259 [2024-07-15 13:56:19.341152] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9509: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:21.520 { 00:10:21.520 "nqn": "nqn.2016-06.io.spdk:cnode9509", 00:10:21.520 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:21.520 "method": "nvmf_create_subsystem", 00:10:21.520 "req_id": 1 00:10:21.520 } 00:10:21.520 Got JSON-RPC error response 00:10:21.520 response: 00:10:21.520 { 00:10:21.520 "code": -32602, 00:10:21.520 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:21.520 }' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:21.520 { 00:10:21.520 "nqn": "nqn.2016-06.io.spdk:cnode9509", 00:10:21.520 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:21.520 "method": "nvmf_create_subsystem", 00:10:21.520 "req_id": 1 00:10:21.520 } 00:10:21.520 Got JSON-RPC error response 00:10:21.520 response: 00:10:21.520 { 00:10:21.520 "code": -32602, 00:10:21.520 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:21.520 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25646 00:10:21.520 [2024-07-15 13:56:19.517739] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25646: invalid model number 'SPDK_Controller' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:21.520 { 00:10:21.520 "nqn": "nqn.2016-06.io.spdk:cnode25646", 00:10:21.520 "model_number": "SPDK_Controller\u001f", 00:10:21.520 "method": "nvmf_create_subsystem", 00:10:21.520 "req_id": 1 00:10:21.520 } 00:10:21.520 Got JSON-RPC error response 00:10:21.520 response: 00:10:21.520 { 00:10:21.520 "code": -32602, 00:10:21.520 "message": "Invalid MN SPDK_Controller\u001f" 00:10:21.520 }' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:21.520 { 00:10:21.520 "nqn": "nqn.2016-06.io.spdk:cnode25646", 00:10:21.520 "model_number": "SPDK_Controller\u001f", 00:10:21.520 "method": "nvmf_create_subsystem", 00:10:21.520 "req_id": 1 00:10:21.520 } 00:10:21.520 Got JSON-RPC error response 00:10:21.520 response: 00:10:21.520 { 00:10:21.520 "code": -32602, 00:10:21.520 "message": "Invalid MN SPDK_Controller\u001f" 00:10:21.520 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.520 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.521 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\~tsX\"I:ISyjXV{wo~7M' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\~tsX\"I:ISyjXV{wo~7M' nqn.2016-06.io.spdk:cnode16638 00:10:21.782 [2024-07-15 13:56:19.854784] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16638: invalid serial number '\~tsX\"I:ISyjXV{wo~7M' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:21.782 { 00:10:21.782 "nqn": "nqn.2016-06.io.spdk:cnode16638", 00:10:21.782 "serial_number": "\\~tsX\\\"I:ISyjXV{wo~7M", 00:10:21.782 "method": "nvmf_create_subsystem", 00:10:21.782 "req_id": 1 00:10:21.782 } 00:10:21.782 Got JSON-RPC error response 00:10:21.782 response: 00:10:21.782 { 
00:10:21.782 "code": -32602, 00:10:21.782 "message": "Invalid SN \\~tsX\\\"I:ISyjXV{wo~7M" 00:10:21.782 }' 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:21.782 { 00:10:21.782 "nqn": "nqn.2016-06.io.spdk:cnode16638", 00:10:21.782 "serial_number": "\\~tsX\\\"I:ISyjXV{wo~7M", 00:10:21.782 "method": "nvmf_create_subsystem", 00:10:21.782 "req_id": 1 00:10:21.782 } 00:10:21.782 Got JSON-RPC error response 00:10:21.782 response: 00:10:21.782 { 00:10:21.782 "code": -32602, 00:10:21.782 "message": "Invalid SN \\~tsX\\\"I:ISyjXV{wo~7M" 00:10:21.782 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:21.782 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 53 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:22.043 13:56:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.043 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.044 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '{>l"5K0]?UeG!Qf<^eyPimm?%t9Z@O\H0a@;1cf3' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '{>l"5K0]?UeG!Qf<^eyPimm?%t9Z@O\H0a@;1cf3' nqn.2016-06.io.spdk:cnode32157 00:10:22.305 [2024-07-15 13:56:20.336334] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32157: invalid model number '{>l"5K0]?UeG!Qf<^eyPimm?%t9Z@O\H0a@;1cf3' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:22.305 { 00:10:22.305 "nqn": "nqn.2016-06.io.spdk:cnode32157", 00:10:22.305 "model_number": "{>l\"5K0]?UeG!Qf<^eyP\u007fimm?%t9Z@O\\H0a@;1cf3", 00:10:22.305 "method": "nvmf_create_subsystem", 00:10:22.305 "req_id": 1 00:10:22.305 } 00:10:22.305 Got JSON-RPC error response 00:10:22.305 response: 00:10:22.305 { 00:10:22.305 "code": -32602, 00:10:22.305 "message": "Invalid MN {>l\"5K0]?UeG!Qf<^eyP\u007fimm?%t9Z@O\\H0a@;1cf3" 00:10:22.305 }' 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:22.305 { 00:10:22.305 "nqn": "nqn.2016-06.io.spdk:cnode32157", 00:10:22.305 "model_number": "{>l\"5K0]?UeG!Qf<^eyP\u007fimm?%t9Z@O\\H0a@;1cf3", 00:10:22.305 "method": "nvmf_create_subsystem", 00:10:22.305 "req_id": 1 00:10:22.305 } 00:10:22.305 Got JSON-RPC error response 00:10:22.305 response: 00:10:22.305 { 00:10:22.305 "code": -32602, 00:10:22.305 "message": "Invalid MN {>l\"5K0]?UeG!Qf<^eyP\u007fimm?%t9Z@O\\H0a@;1cf3" 00:10:22.305 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:22.305 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:22.566 [2024-07-15 13:56:20.508956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.566 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:22.826 [2024-07-15 13:56:20.858095] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:22.826 { 00:10:22.826 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:22.826 "listen_address": { 00:10:22.826 "trtype": "tcp", 00:10:22.826 "traddr": "", 00:10:22.826 "trsvcid": "4421" 00:10:22.826 }, 00:10:22.826 "method": "nvmf_subsystem_remove_listener", 00:10:22.826 "req_id": 1 00:10:22.826 } 00:10:22.826 Got JSON-RPC error response 00:10:22.826 response: 00:10:22.826 { 00:10:22.826 "code": -32602, 00:10:22.826 "message": "Invalid parameters" 00:10:22.826 }' 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:22.826 { 
00:10:22.826 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:22.826 "listen_address": { 00:10:22.826 "trtype": "tcp", 00:10:22.826 "traddr": "", 00:10:22.826 "trsvcid": "4421" 00:10:22.826 }, 00:10:22.826 "method": "nvmf_subsystem_remove_listener", 00:10:22.826 "req_id": 1 00:10:22.826 } 00:10:22.826 Got JSON-RPC error response 00:10:22.826 response: 00:10:22.826 { 00:10:22.826 "code": -32602, 00:10:22.826 "message": "Invalid parameters" 00:10:22.826 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:22.826 13:56:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19594 -i 0 00:10:23.086 [2024-07-15 13:56:21.026624] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19594: invalid cntlid range [0-65519] 00:10:23.086 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:23.086 { 00:10:23.086 "nqn": "nqn.2016-06.io.spdk:cnode19594", 00:10:23.086 "min_cntlid": 0, 00:10:23.086 "method": "nvmf_create_subsystem", 00:10:23.086 "req_id": 1 00:10:23.086 } 00:10:23.086 Got JSON-RPC error response 00:10:23.086 response: 00:10:23.086 { 00:10:23.086 "code": -32602, 00:10:23.086 "message": "Invalid cntlid range [0-65519]" 00:10:23.086 }' 00:10:23.086 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:23.086 { 00:10:23.086 "nqn": "nqn.2016-06.io.spdk:cnode19594", 00:10:23.086 "min_cntlid": 0, 00:10:23.086 "method": "nvmf_create_subsystem", 00:10:23.086 "req_id": 1 00:10:23.086 } 00:10:23.086 Got JSON-RPC error response 00:10:23.086 response: 00:10:23.086 { 00:10:23.087 "code": -32602, 00:10:23.087 "message": "Invalid cntlid range [0-65519]" 00:10:23.087 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:23.087 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21356 -i 65520 00:10:23.087 [2024-07-15 13:56:21.195152] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21356: invalid cntlid range [65520-65519] 00:10:23.347 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:23.347 { 00:10:23.347 "nqn": "nqn.2016-06.io.spdk:cnode21356", 00:10:23.347 "min_cntlid": 65520, 00:10:23.347 "method": "nvmf_create_subsystem", 00:10:23.347 "req_id": 1 00:10:23.347 } 00:10:23.347 Got JSON-RPC error response 00:10:23.347 response: 00:10:23.347 { 00:10:23.347 "code": -32602, 00:10:23.347 "message": "Invalid cntlid range [65520-65519]" 00:10:23.347 }' 00:10:23.347 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:23.347 { 00:10:23.347 "nqn": "nqn.2016-06.io.spdk:cnode21356", 00:10:23.347 "min_cntlid": 65520, 00:10:23.347 "method": "nvmf_create_subsystem", 00:10:23.347 "req_id": 1 00:10:23.347 } 00:10:23.347 Got JSON-RPC error response 00:10:23.347 response: 00:10:23.347 { 00:10:23.347 "code": -32602, 00:10:23.347 "message": "Invalid cntlid range [65520-65519]" 00:10:23.347 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:23.347 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25618 -I 0 00:10:23.347 [2024-07-15 13:56:21.355676] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25618: invalid cntlid range [1-0] 00:10:23.347 13:56:21 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:23.347 { 00:10:23.347 "nqn": "nqn.2016-06.io.spdk:cnode25618", 00:10:23.347 "max_cntlid": 0, 00:10:23.347 "method": "nvmf_create_subsystem", 00:10:23.347 "req_id": 1 00:10:23.347 } 00:10:23.347 Got JSON-RPC error response 00:10:23.347 response: 00:10:23.347 { 00:10:23.347 "code": -32602, 00:10:23.347 "message": "Invalid cntlid range [1-0]" 00:10:23.347 }' 00:10:23.347 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:23.347 { 00:10:23.347 "nqn": "nqn.2016-06.io.spdk:cnode25618", 00:10:23.347 "max_cntlid": 0, 00:10:23.347 "method": "nvmf_create_subsystem", 00:10:23.347 "req_id": 1 00:10:23.347 } 00:10:23.347 Got JSON-RPC error response 00:10:23.347 response: 00:10:23.347 { 00:10:23.347 "code": -32602, 00:10:23.347 "message": "Invalid cntlid range [1-0]" 00:10:23.347 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:23.347 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18743 -I 65520 00:10:23.607 [2024-07-15 13:56:21.528200] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18743: invalid cntlid range [1-65520] 00:10:23.607 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:23.607 { 00:10:23.607 "nqn": "nqn.2016-06.io.spdk:cnode18743", 00:10:23.607 "max_cntlid": 65520, 00:10:23.607 "method": "nvmf_create_subsystem", 00:10:23.607 "req_id": 1 00:10:23.607 } 00:10:23.607 Got JSON-RPC error response 00:10:23.607 response: 00:10:23.607 { 00:10:23.607 "code": -32602, 00:10:23.607 "message": "Invalid cntlid range [1-65520]" 00:10:23.607 }' 00:10:23.607 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:23.607 { 00:10:23.607 "nqn": "nqn.2016-06.io.spdk:cnode18743", 00:10:23.607 "max_cntlid": 65520, 00:10:23.607 "method": "nvmf_create_subsystem", 00:10:23.607 "req_id": 1 00:10:23.607 } 00:10:23.607 Got JSON-RPC error response 00:10:23.607 response: 00:10:23.607 { 00:10:23.607 "code": -32602, 00:10:23.607 "message": "Invalid cntlid range [1-65520]" 00:10:23.607 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:23.607 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16891 -i 6 -I 5 00:10:23.607 [2024-07-15 13:56:21.700748] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16891: invalid cntlid range [6-5] 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:23.867 { 00:10:23.867 "nqn": "nqn.2016-06.io.spdk:cnode16891", 00:10:23.867 "min_cntlid": 6, 00:10:23.867 "max_cntlid": 5, 00:10:23.867 "method": "nvmf_create_subsystem", 00:10:23.867 "req_id": 1 00:10:23.867 } 00:10:23.867 Got JSON-RPC error response 00:10:23.867 response: 00:10:23.867 { 00:10:23.867 "code": -32602, 00:10:23.867 "message": "Invalid cntlid range [6-5]" 00:10:23.867 }' 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:23.867 { 00:10:23.867 "nqn": "nqn.2016-06.io.spdk:cnode16891", 00:10:23.867 "min_cntlid": 6, 00:10:23.867 "max_cntlid": 5, 00:10:23.867 "method": "nvmf_create_subsystem", 00:10:23.867 "req_id": 1 00:10:23.867 } 00:10:23.867 Got JSON-RPC error response 00:10:23.867 response: 00:10:23.867 { 00:10:23.867 "code": -32602, 00:10:23.867 "message": "Invalid cntlid 
range [6-5]" 00:10:23.867 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:23.867 { 00:10:23.867 "name": "foobar", 00:10:23.867 "method": "nvmf_delete_target", 00:10:23.867 "req_id": 1 00:10:23.867 } 00:10:23.867 Got JSON-RPC error response 00:10:23.867 response: 00:10:23.867 { 00:10:23.867 "code": -32602, 00:10:23.867 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:23.867 }' 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:23.867 { 00:10:23.867 "name": "foobar", 00:10:23.867 "method": "nvmf_delete_target", 00:10:23.867 "req_id": 1 00:10:23.867 } 00:10:23.867 Got JSON-RPC error response 00:10:23.867 response: 00:10:23.867 { 00:10:23.867 "code": -32602, 00:10:23.867 "message": "The specified target doesn't exist, cannot delete it." 00:10:23.867 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.867 rmmod nvme_tcp 00:10:23.867 rmmod nvme_fabrics 00:10:23.867 rmmod nvme_keyring 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1220020 ']' 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1220020 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1220020 ']' 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1220020 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220020 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220020' 00:10:23.867 killing process with pid 1220020 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1220020 00:10:23.867 13:56:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1220020 00:10:24.126 13:56:22 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.126 13:56:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.126 13:56:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.126 13:56:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.126 13:56:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.126 13:56:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.126 13:56:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.126 13:56:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.038 13:56:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:26.298 00:10:26.298 real 0m14.251s 00:10:26.298 user 0m19.282s 00:10:26.298 sys 0m6.887s 00:10:26.299 13:56:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.299 13:56:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:26.299 ************************************ 00:10:26.299 END TEST nvmf_invalid 00:10:26.299 ************************************ 00:10:26.299 13:56:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:26.299 13:56:24 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:26.299 13:56:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:26.299 13:56:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.299 13:56:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.299 ************************************ 00:10:26.299 START TEST nvmf_abort 00:10:26.299 ************************************ 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:26.299 * Looking for test storage... 
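(For reference, a minimal standalone sketch of the kind of negative check the nvmf_invalid test exercised above. It assumes a running nvmf target and the stock scripts/rpc.py from an SPDK checkout; the nqn, the repo-relative path, and the echoed confirmation are illustrative, while the -i/-I min/max cntlid flags and the "Invalid cntlid range" substring match come straight from the trace.)

# Hand-run one invalid-input check: request an inverted cntlid range (min 6, max 5)
# and confirm the target rejects it with the same JSON-RPC error seen in the log above.
out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5 2>&1 || true)
[[ $out == *"Invalid cntlid range"* ]] && echo "target rejected inverted cntlid range, as expected"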
00:10:26.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:26.299 13:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:34.436 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.436 13:56:32 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:34.436 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:34.436 Found net devices under 0000:31:00.0: cvl_0_0 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.436 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:34.437 Found net devices under 0000:31:00.1: cvl_0_1 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:34.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:10:34.437 00:10:34.437 --- 10.0.0.2 ping statistics --- 00:10:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.437 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:10:34.437 00:10:34.437 --- 10.0.0.1 ping statistics --- 00:10:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.437 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1225703 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1225703 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1225703 ']' 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.437 13:56:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:34.437 [2024-07-15 13:56:32.434504] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:34.437 [2024-07-15 13:56:32.434552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.437 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.437 [2024-07-15 13:56:32.525732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:34.697 [2024-07-15 13:56:32.603683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.697 [2024-07-15 13:56:32.603737] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:34.697 [2024-07-15 13:56:32.603745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.697 [2024-07-15 13:56:32.603760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.697 [2024-07-15 13:56:32.603766] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.697 [2024-07-15 13:56:32.603888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.697 [2024-07-15 13:56:32.604173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.697 [2024-07-15 13:56:32.604174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 [2024-07-15 13:56:33.254280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 Malloc0 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 Delay0 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.267 13:56:33 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 [2024-07-15 13:56:33.337689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.267 13:56:33 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:35.527 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.527 [2024-07-15 13:56:33.458431] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:37.437 Initializing NVMe Controllers 00:10:37.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:37.437 controller IO queue size 128 less than required 00:10:37.437 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:37.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:37.437 Initialization complete. Launching workers. 
00:10:37.437 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 32418 00:10:37.437 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32481, failed to submit 62 00:10:37.437 success 32422, unsuccess 59, failed 0 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.437 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.697 rmmod nvme_tcp 00:10:37.697 rmmod nvme_fabrics 00:10:37.697 rmmod nvme_keyring 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1225703 ']' 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1225703 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1225703 ']' 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1225703 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225703 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225703' 00:10:37.697 killing process with pid 1225703 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1225703 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1225703 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.697 13:56:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.273 13:56:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.273 00:10:40.273 real 0m13.649s 00:10:40.273 user 0m13.511s 00:10:40.273 sys 0m6.849s 00:10:40.273 13:56:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.273 13:56:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:40.273 ************************************ 00:10:40.273 END TEST nvmf_abort 00:10:40.273 ************************************ 00:10:40.273 13:56:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.273 13:56:37 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:40.273 13:56:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.273 13:56:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.273 13:56:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.273 ************************************ 00:10:40.273 START TEST nvmf_ns_hotplug_stress 00:10:40.273 ************************************ 00:10:40.273 13:56:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:40.273 * Looking for test storage... 00:10:40.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.273 13:56:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.273 13:56:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:40.273 13:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.406 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:48.407 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:48.407 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.407 13:56:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:48.407 Found net devices under 0000:31:00.0: cvl_0_0 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:48.407 Found net devices under 0000:31:00.1: cvl_0_1 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.407 13:56:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:10:48.407 00:10:48.407 --- 10.0.0.2 ping statistics --- 00:10:48.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.407 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:10:48.407 00:10:48.407 --- 10.0.0.1 ping statistics --- 00:10:48.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.407 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1231073 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1231073 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1231073 ']' 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.407 13:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.407 [2024-07-15 13:56:46.421513] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
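nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers, which is why the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line appears before any RPC runs. A rough by-hand equivalent, assuming the workspace path from this run; the polling loop is a stand-in for SPDK's waitforlisten helper, which probes the socket in much the same way:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # wait for the app to listen on /var/tmp/spdk.sock, bailing out if it dies
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done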
00:10:48.407 [2024-07-15 13:56:46.421564] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.407 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.407 [2024-07-15 13:56:46.513721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:48.667 [2024-07-15 13:56:46.589423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.667 [2024-07-15 13:56:46.589469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.667 [2024-07-15 13:56:46.589478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.667 [2024-07-15 13:56:46.589484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.667 [2024-07-15 13:56:46.589490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.667 [2024-07-15 13:56:46.589617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.667 [2024-07-15 13:56:46.589794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.667 [2024-07-15 13:56:46.589795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:49.237 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:49.497 [2024-07-15 13:56:47.371526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.497 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.497 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.757 [2024-07-15 13:56:47.712972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.757 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.018 13:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:50.018 Malloc0 00:10:50.018 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:50.279 Delay0 00:10:50.279 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.539 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:50.539 NULL1 00:10:50.539 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:50.799 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1231447 00:10:50.799 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:50.799 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:50.800 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.800 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.060 13:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.060 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:51.060 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:51.321 [2024-07-15 13:56:49.233559] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:51.321 true 00:10:51.321 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:51.321 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.321 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.581 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:51.581 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:51.842 true 00:10:51.842 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:51.842 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.842 13:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.103 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:52.103 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:52.364 true 00:10:52.364 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:52.364 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.364 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.625 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:52.625 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:52.886 true 00:10:52.886 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:52.886 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.886 13:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.147 13:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:53.147 13:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:53.147 true 00:10:53.407 13:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:53.407 13:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.350 Read completed with error (sct=0, sc=11) 00:10:54.350 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.350 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:54.350 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:54.350 true 00:10:54.350 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:54.350 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.611 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.872 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:54.872 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:54.872 true 00:10:54.872 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:54.872 13:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.133 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.394 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:55.394 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:55.394 true 00:10:55.394 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:55.394 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.654 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.915 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:55.915 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:55.915 true 00:10:55.915 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:55.915 13:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.176 13:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.176 13:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:56.176 13:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:56.438 true 00:10:56.438 13:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:56.438 13:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.378 13:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.378 13:56:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:57.378 13:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:57.638 true 00:10:57.638 13:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:57.638 13:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.899 13:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.899 13:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:57.899 13:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:58.160 true 00:10:58.160 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:58.160 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.430 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.430 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:58.430 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:58.691 true 00:10:58.691 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:58.691 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.951 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.951 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:58.951 13:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:59.212 true 00:10:59.212 13:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:59.212 13:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.212 13:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.473 13:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:59.473 13:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:59.733 true 00:10:59.733 13:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:10:59.733 13:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.674 13:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.674 13:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:00.674 13:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:00.934 true 00:11:00.934 13:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:00.934 13:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.934 13:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.195 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:01.195 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:01.195 true 00:11:01.455 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:01.455 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.455 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.714 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:01.714 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:01.714 true 00:11:01.975 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:01.975 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.975 13:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.236 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:02.236 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:02.236 true 00:11:02.236 
13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:02.236 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.497 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.757 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:02.757 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:02.757 true 00:11:02.757 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:02.757 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.039 13:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.303 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:03.303 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:03.303 true 00:11:03.303 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:03.303 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.562 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.823 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:03.823 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:03.823 true 00:11:03.823 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:03.823 13:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.084 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.084 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:04.084 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:04.344 true 00:11:04.344 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:04.344 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.603 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.603 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:04.604 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:04.863 true 00:11:04.863 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:04.863 13:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.124 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.124 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:05.124 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:05.385 true 00:11:05.385 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:05.385 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.646 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.646 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:05.646 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:05.906 true 00:11:05.906 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:05.906 13:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.847 13:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.107 13:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:07.107 13:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:07.107 true 00:11:07.107 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:07.107 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.367 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.628 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:07.628 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:07.628 true 00:11:07.628 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:07.628 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.889 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.889 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:07.889 13:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:08.149 true 00:11:08.149 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:08.149 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.409 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.409 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:08.409 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:08.714 true 00:11:08.714 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:08.714 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.986 13:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.986 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:08.986 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:09.254 true 00:11:09.254 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:09.254 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.254 13:57:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.515 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:09.515 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:09.775 true 00:11:09.775 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:09.775 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.775 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.035 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:10.035 13:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:10.035 true 00:11:10.295 13:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:10.295 13:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.236 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.236 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:11.236 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:11.236 true 00:11:11.495 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:11.495 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.495 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.754 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:11.754 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:11.754 true 00:11:11.754 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:11.754 13:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.014 
13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.274 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:12.274 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:12.274 true 00:11:12.274 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:12.274 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.534 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.794 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:12.794 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:12.794 true 00:11:12.794 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:12.794 13:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.054 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.314 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:13.314 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:13.314 true 00:11:13.314 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:13.314 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.575 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.835 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:13.835 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:13.835 true 00:11:13.835 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:13.835 13:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.095 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.355 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:14.355 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:14.355 true 00:11:14.355 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:14.355 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.616 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.616 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:14.616 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:14.878 true 00:11:14.878 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:14.878 13:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.139 13:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.139 13:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:15.139 13:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:15.399 true 00:11:15.399 13:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:15.399 13:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.340 13:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.340 13:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:16.340 13:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:16.600 true 00:11:16.600 13:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:16.600 13:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.861 13:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.861 13:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:16.861 13:57:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:17.121 true 00:11:17.121 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:17.121 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.399 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.399 [2024-07-15 13:57:15.368222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats once per rejected read, timestamps 2024-07-15 13:57:15.368222 through 13:57:15.375169; the duplicates are omitted here ...]
00:11:17.402 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical ctrlr_bdev.c: 309 *ERROR* lines resume, 13:57:15.375199 through 13:57:15.376150, and continue past the end of this excerpt ...]
length 1 00:11:17.403 [2024-07-15 13:57:15.376172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.376985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.377984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378199] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.403 [2024-07-15 13:57:15.378885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.378907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.378935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.378962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.378991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 
[2024-07-15 13:57:15.379054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.379999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380556] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.380838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.404 [2024-07-15 13:57:15.381920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.381949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.381978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 
[2024-07-15 13:57:15.382167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.382989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383702] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.383979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.384006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.384034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.384061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.405 [2024-07-15 13:57:15.384087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 
[2024-07-15 13:57:15.384442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.384996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.385997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386233] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.406 [2024-07-15 13:57:15.386691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.386946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 
[2024-07-15 13:57:15.386974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.387991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407 [2024-07-15 13:57:15.388760] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.407
[2024-07-15 13:57:15.388789 - 13:57:15.399855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated continuously; individual occurrences collapsed) 00:11:17.411
13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
[2024-07-15 13:57:15.399884 - 13:57:15.400178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (repetition continues) 00:11:17.411
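The message flooding this stretch of the log is a single length-validation failure in the target's bdev layer: the READ command asks for NLB 1 logical block of 512 bytes, but the request's SGL describes only 1 byte, so nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309) rejects the command and logs the condition. The ns_hotplug_stress script visible in the trace appears to provoke this deliberately, driving reads at the NULL1 null bdev while repeatedly resizing it (here to null_size=1045 via rpc.py bdev_null_resize). A minimal sketch of such a check, using hypothetical names rather than the actual SPDK source:

#include <inttypes.h>
#include <stdio.h>

/* Hypothetical stand-ins for the request fields involved in the check. */
struct read_cmd_check {
        uint64_t nlb;        /* number of logical blocks requested (1 in this log) */
        uint32_t block_size; /* logical block size of the namespace (512 here) */
        uint64_t sgl_length; /* total bytes described by the command's SGL (1 here) */
};

/* Returns 0 if the read fits the SGL, -1 if it must be failed; the
 * error text mirrors the condition printed throughout the log. */
static int validate_read_length(const struct read_cmd_check *c)
{
        if (c->nlb * (uint64_t)c->block_size > c->sgl_length) {
                fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu64 "\n",
                        c->nlb, c->block_size, c->sgl_length);
                return -1;
        }
        return 0;
}

int main(void)
{
        /* Values taken from the log: a one-block (512-byte) read against a 1-byte SGL. */
        struct read_cmd_check c = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
        return validate_read_length(&c) == 0 ? 0 : 1;
}

Each rejected command is completed back to the initiator with an error status rather than taking the target down, which is how the hotplug stress loop can keep resizing NULL1 while the flood continues below.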
[2024-07-15 13:57:15.400205 - 13:57:15.407333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated continuously; individual occurrences collapsed) 00:11:17.413 [2024-07-15 13:57:15.407368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.413 [2024-07-15 13:57:15.407880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.407908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.407933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.407960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.407988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408070] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 
[2024-07-15 13:57:15.408840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.408981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.414 [2024-07-15 13:57:15.409760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 
13:57:15.409904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.409988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:17.414 [2024-07-15 13:57:15.410623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.414 [2024-07-15 13:57:15.410919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.410942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.410965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.410988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.411995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412684] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.412981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 
[2024-07-15 13:57:15.413433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.413999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.414052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.415 [2024-07-15 13:57:15.414078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.414976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415131] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.415639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 
[2024-07-15 13:57:15.416607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.416984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.416 [2024-07-15 13:57:15.417664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.417976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418091] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 [2024-07-15 13:57:15.418944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.417 
[2024-07-15 13:57:15.418971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines repeated several hundred times, timestamps 13:57:15.418971 through 13:57:15.436918 ...]
00:11:17.423 [2024-07-15 13:57:15.436918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-15 13:57:15.436943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.436970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.436997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.437990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438786] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.438987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.423 [2024-07-15 13:57:15.439266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.439299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.439328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.439354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.439379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.439402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.439431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 
[2024-07-15 13:57:15.440189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.440989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441637] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.441922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.424 [2024-07-15 13:57:15.442319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 
[2024-07-15 13:57:15.442483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.442955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.443836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.425 [2024-07-15 13:57:15.444503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:17.425 [2024-07-15 13:57:15.444557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.444982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.425 [2024-07-15 13:57:15.445593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.445978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446001] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.446944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 
[2024-07-15 13:57:15.446977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.447987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.426 [2024-07-15 13:57:15.448715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.427 [2024-07-15 13:57:15.448741] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.427 
[2024-07-15 13:57:15.448771 through 13:57:15.466859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same entry repeated back-to-back over this interval; duplicate lines elided) 00:11:17.427 - 00:11:17.433 
[2024-07-15 13:57:15.466886] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.466914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.466942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.466970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.466999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 
[2024-07-15 13:57:15.467646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.467702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.433 [2024-07-15 13:57:15.468430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.468984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469518] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.469997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 
[2024-07-15 13:57:15.470577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.470991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.471019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.434 [2024-07-15 13:57:15.471046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.471991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472195] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.472974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 
[2024-07-15 13:57:15.473581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.435 [2024-07-15 13:57:15.473888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.473911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.473935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.473958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.473981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.474983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475180] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 
[2024-07-15 13:57:15.475947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.475976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.476997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.436 [2024-07-15 13:57:15.477335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477827] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.477989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 
[2024-07-15 13:57:15.478946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.478978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.437 [2024-07-15 13:57:15.479207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 
13:57:15.479666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.479997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
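For context on the flood above: the target rejects each of these reads in nvmf_bdev_ctrlr_read_cmd because the transfer the command describes (NLB blocks times the 512-byte block size) exceeds the payload buffer the request's SGL provides (1 byte here). Below is a minimal sketch of that kind of length check; the structure and function names are hypothetical, not SPDK's actual ones, and only the comparison and message mirror the logged error.

/* Sketch only: illustrative field names, not SPDK's real structures. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct read_req {
    uint64_t num_blocks; /* NLB from the read command */
    uint32_t block_size; /* namespace block size; 512 in this run */
    uint32_t sgl_length; /* total payload length the SGL describes */
};

static bool read_len_valid(const struct read_req *req)
{
    /* Widen to 64-bit before comparing so the multiply cannot wrap. */
    uint64_t xfer = req->num_blocks * (uint64_t)req->block_size;

    if (xfer > req->sgl_length) {
        fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                req->num_blocks, req->block_size, req->sgl_length);
        return false; /* command completes with an error status */
    }
    return true;
}

int main(void)
{
    /* The case seen in this log: a 1-block (512 B) read with a 1-byte SGL. */
    struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
    return read_len_valid(&req) ? 0 : 1;
}

Each rejected command completes with sct=0, sc=15, consistent with the NVMe generic status "Data SGL Length Invalid" (0x0f); that stream of failed completions is what the "Message suppressed 999 times" record refers to.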
length 1 00:11:17.437 [2024-07-15 13:57:15.480383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.437 [2024-07-15 13:57:15.480412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.480709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.481989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482162] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.482866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 
[2024-07-15 13:57:15.483352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.438 [2024-07-15 13:57:15.483801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.483829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.483857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.483886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.483913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.483945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.483973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484854] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.484995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.485983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 
[2024-07-15 13:57:15.486018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.439 [2024-07-15 13:57:15.486778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.486803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.486830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.486860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.486909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.486938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.486961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.486991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487806] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.487978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 
[2024-07-15 13:57:15.488570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.488993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.489975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490367] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.440 [2024-07-15 13:57:15.490426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.490981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 
[2024-07-15 13:57:15.491119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.491705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492845] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.492982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.441 [2024-07-15 13:57:15.493434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.442 [2024-07-15 13:57:15.493461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.442 [2024-07-15 13:57:15.493491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.442 [2024-07-15 13:57:15.493520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.442 [2024-07-15 13:57:15.493549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.442 [2024-07-15 13:57:15.493575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.442 [2024-07-15 13:57:15.493604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.442 
00:11:17.442 [2024-07-15 13:57:15.493632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:17.442 [... identical *ERROR* line repeated several hundred times between 13:57:15.493632 and 13:57:15.512459; duplicates collapsed, stream timestamps 00:11:17.442 through 00:11:17.738 ...]
00:11:17.738 [2024-07-15 13:57:15.512459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-15 13:57:15.512490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.738 [2024-07-15 13:57:15.512963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.512992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 
13:57:15.513232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.513988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:17.738 [2024-07-15 13:57:15.514248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.738 [2024-07-15 13:57:15.514727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.514976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515690] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.515763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 
[2024-07-15 13:57:15.516809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.516974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.739 [2024-07-15 13:57:15.517352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.517897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518572] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.518993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 
[2024-07-15 13:57:15.519368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.740 [2024-07-15 13:57:15.519993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.520977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521253] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.521965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 
[2024-07-15 13:57:15.521991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.522946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.523003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.523031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.523061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.741 [2024-07-15 13:57:15.523095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.742 [2024-07-15 13:57:15.523801] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.745 true 00:11:17.745 [2024-07-15 13:57:15.543004] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 
[2024-07-15 13:57:15.543730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.543971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.544681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545564] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.545995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 
[2024-07-15 13:57:15.546313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.747 [2024-07-15 13:57:15.546828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.547982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 Message suppressed 999 times: [2024-07-15 13:57:15.548036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 Read completed with error (sct=0, sc=15) 00:11:17.748 [2024-07-15 13:57:15.548068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:17.748 [2024-07-15 13:57:15.548506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.548979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.549981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550037] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 
[2024-07-15 13:57:15.550775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.550974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.551002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.551029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.551064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.748 [2024-07-15 13:57:15.551091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.551996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552585] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.552978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 
[2024-07-15 13:57:15.553319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.553982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.749 [2024-07-15 13:57:15.554363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:11:17.749 [2024-07-15 13:57:15.554400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same ctrlr_bdev.c:309 error repeated continuously, 13:57:15.554434 through 13:57:15.560318 ...]
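Context (editor's note, not part of the captured console output): the flood above is SPDK's NVMe-oF target rejecting Read commands in lib/nvmf/ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd) because the requested transfer, NLB 1 * 512-byte blocks = 512 bytes, exceeds the 1-byte SGL the initiator supplied, exactly as the message states. A quick way to size a flood like this when triaging such a log; the build.log filename is an assumption:

    # Count occurrences of the repeated error (asterisk escaped for grep BRE),
    # then print the first and last embedded timestamps to bound the flood.
    grep -c 'Read NLB 1 \* block size 512 > SGL length 1' build.log
    grep -o '2024-07-15 13:57:[0-9.]*' build.log | sed -n '1p;$p'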
00:11:17.751 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
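Context (editor's note): `kill -0` at ns_hotplug_stress.sh line 44 delivers no signal; it only tests that the target process (PID 1231447 here) still exists and is signalable, i.e. a liveness probe. A minimal sketch of the same check:

    # Probe whether PID 1231447 is still running without disturbing it.
    if kill -0 1231447 2>/dev/null; then
        echo "process still running"
    else
        echo "process has exited"
    fi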
[... same ctrlr_bdev.c:309 error repeated, 13:57:15.560347 through 13:57:15.560684 ...]
00:11:17.751 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
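Context (editor's note): line 45 of the stress script detaches namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1 while I/O is still in flight, which is the hot-plug condition this test exercises. A sketch of one remove/re-add cycle via the same RPC script; the Malloc0 bdev name and the -n nsid option of nvmf_subsystem_add_ns are assumptions, since the re-add side is not visible in this part of the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Detach namespace ID 1 from the subsystem while initiators are busy.
    "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
    # Re-attach it (assumed bdev name and nsid flag).
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0 -n 1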
[... same ctrlr_bdev.c:309 error repeated continuously, 13:57:15.560728 through 13:57:15.572579 ...]
00:11:17.754 [2024-07-15 13:57:15.572605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.572983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573671] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.573979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 
[2024-07-15 13:57:15.574394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.574979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.754 [2024-07-15 13:57:15.575558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.575994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576159] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.576957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 
[2024-07-15 13:57:15.576991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.577325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.578982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579264] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.579909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.580049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.755 [2024-07-15 13:57:15.580077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 
[2024-07-15 13:57:15.580123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.580991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581580] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.581869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.756 [2024-07-15 13:57:15.582511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582590] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.582986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 
[2024-07-15 13:57:15.583340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.756 [2024-07-15 13:57:15.583936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.757 [2024-07-15 13:57:15.583959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.757 [2024-07-15 13:57:15.584514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.757 [2024-07-15 13:57:15.584539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.757 [2024-07-15 13:57:15.584564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.757 [2024-07-15 13:57:15.584588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:17.757 [... same *ERROR* line repeated several hundred times, timestamps 13:57:15.584611 through 13:57:15.602223; verbatim duplicates elided ...]
00:11:17.761 [2024-07-15 13:57:15.602249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602879] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.602972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.761 [2024-07-15 13:57:15.603358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 
[2024-07-15 13:57:15.603589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.603994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.604816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605330] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.605986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 
[2024-07-15 13:57:15.606068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.606949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.762 [2024-07-15 13:57:15.607482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607929] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.607984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 
[2024-07-15 13:57:15.608670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.608982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.609937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610717] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.610996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 
[2024-07-15 13:57:15.611382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.763 [2024-07-15 13:57:15.611640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.611997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 [2024-07-15 13:57:15.612993] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.764 Message suppressed 999 times: [2024-07-15 13:57:15.615861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.765 Read completed with error (sct=0, sc=15) 00:11:17.765 [2024-07-15 13:57:15.631654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.631989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632374] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.632984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 
[2024-07-15 13:57:15.633200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.633572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.634986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635149] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.770 [2024-07-15 13:57:15.635823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.635854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.635886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.635913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.635950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.635978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 
[2024-07-15 13:57:15.636062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.636981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637740] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.637977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 
[2024-07-15 13:57:15.638484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.638980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.771 [2024-07-15 13:57:15.639872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.639898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.639925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.639952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.639985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640233] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.640946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 
[2024-07-15 13:57:15.640975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.641986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772 [2024-07-15 13:57:15.642778] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.772
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously from 13:57:15.642 to 13:57:15.648 ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.774
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously from 13:57:15.649 to 13:57:15.661 ...]
[2024-07-15 13:57:15.660360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.660974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.661980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662125] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 
[2024-07-15 13:57:15.662907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.662993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.663026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.663065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.663101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.663134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.663171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.663207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.777 [2024-07-15 13:57:15.663233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.663988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664874] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.664975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.665655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 
[2024-07-15 13:57:15.666048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.666986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667547] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.667890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.778 [2024-07-15 13:57:15.668492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 
[2024-07-15 13:57:15.668636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.668978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.669978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670411] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.670987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 
[2024-07-15 13:57:15.671152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.671999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.672992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 [2024-07-15 13:57:15.673022] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.779 
[the *ERROR* line above repeats verbatim for each queued read, timestamps 13:57:15.673048 through 13:57:15.684298; duplicates collapsed] 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.782 
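For readers triaging this flood: the message comes from the ctrlr_bdev unit test deliberately driving the read-length validation in the NVMe-oF bdev path. A read carries NLB (number of logical blocks) and a transport-supplied SGL payload length; when NLB * block size exceeds the SGL length, the request is rejected and completed with a generic status (sct=0) and, assuming the sc=15 in the suppressed completions maps to the NVMe "Data SGL Length Invalid" generic status code (0x0f), that is exactly the pairing logged above. The sketch below is a minimal standalone illustration of that check under those assumptions; read_len_ok and the status macros are hypothetical names, not SPDK's actual implementation.

    /* Illustrative sketch only -- mirrors the length check behind the
     * repeated message above, not SPDK's shipped code. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SCT_GENERIC                0x0
    #define SC_DATA_SGL_LENGTH_INVALID 0xf /* decimal 15, matching "sc=15" */

    /* Hypothetical helper: returns true when the read may proceed;
     * otherwise logs the error and fills in the completion status. */
    static bool
    read_len_ok(uint64_t nlb, uint64_t block_size, uint32_t sgl_len,
                int *sct, int *sc)
    {
            if (nlb * block_size > sgl_len) {
                    fprintf(stderr,
                            "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu64
                            " > SGL length %" PRIu32 "\n",
                            nlb, block_size, sgl_len);
                    *sct = SCT_GENERIC;
                    *sc = SC_DATA_SGL_LENGTH_INVALID;
                    return false;
            }
            return true;
    }

    int
    main(void)
    {
            int sct, sc;

            /* The unit test's case: 1 block of 512 bytes vs. a 1-byte SGL. */
            if (!read_len_ok(1, 512, 1, &sct, &sc)) {
                    printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
            }
            return 0;
    }

Each rejected read emits one *ERROR* line, which is why the unit test produces the wall of identical messages collapsed here; the logger's own suppression ("Message suppressed 999 times") kicks in for the matching completion-status line.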
[the identical *ERROR* line continues, timestamps 13:57:15.684327 through 13:57:15.691670; duplicates collapsed] 
[2024-07-15 13:57:15.691702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.691960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.783 [2024-07-15 13:57:15.692558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692609] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.692991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 
[2024-07-15 13:57:15.693327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.693991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.694018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.784 [2024-07-15 13:57:15.694045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
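The flood of identical lines above appears to be expected for this stage: the ns_hotplug_stress target fires reads at a namespace while it is hot-added and removed (the rpc.py nvmf_subsystem_add_ns call below), and the target rejects any read whose required transfer, NLB × block size, exceeds the buffer described by the command's SGL; the initiator-side completions are then rate-limited as "Message suppressed 999 times". A minimal C sketch of that kind of length check, assuming a simplified request shape (the struct and function names here are illustrative, not SPDK's actual internals):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified request descriptor -- SPDK's real structures
 * (spdk_nvmf_request and friends) carry much more state. */
struct read_cmd {
	uint64_t nlb;        /* number of logical blocks to read (here: 1)     */
	uint32_t block_size; /* namespace block size in bytes (here: 512)      */
	uint32_t sgl_length; /* bytes described by the command's SGL (here: 1) */
};

/* Reject a read whose payload cannot fit in the buffer the initiator
 * described; this is the condition the repeated log line reports. */
static int
validate_read_len(const struct read_cmd *cmd)
{
	if (cmd->nlb * (uint64_t)cmd->block_size > cmd->sgl_length) {
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			cmd->nlb, cmd->block_size, cmd->sgl_length);
		return -1; /* caller completes the command with an NVMe error status */
	}
	return 0;
}

int
main(void)
{
	/* The values the stress test provokes in the log above. */
	struct read_cmd cmd = { .nlb = 1, .block_size = 512, .sgl_length = 1 };

	return validate_read_len(&cmd) == 0 ? 0 : 1;
}
```

With these values the check fails and the command never reaches the backing bdev, which is why the initiator sees each read complete with an error status rather than a hang.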
00:11:17.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.784 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:17.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:18.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:18.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:18.074 [2024-07-15 13:57:15.860327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:18.077 [2024-07-15 13:57:15.872315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077
[2024-07-15 13:57:15.872343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.872966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.873973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874143] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.077 [2024-07-15 13:57:15.874229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 
[2024-07-15 13:57:15.874862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.874975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.875994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876557] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.078 [2024-07-15 13:57:15.876927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.876953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.876976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.876998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 
[2024-07-15 13:57:15.877220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.877999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878692] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.878997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.879989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 
[2024-07-15 13:57:15.880015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.079 [2024-07-15 13:57:15.880457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.880987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881491] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.881986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 
[2024-07-15 13:57:15.882340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.882984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.883014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.080 [2024-07-15 13:57:15.883038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:11:18.080 [... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats several hundred times (timestamps 13:57:15.883 through 13:57:15.887); duplicates elided ...]
00:11:18.081 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
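For reference, the condition behind every one of these rejections, restated once as runnable shell with the values the log reports (the check itself lives in C at ctrlr_bdev.c:309, per the message prefix; reading sc=15 as NVMe's generic status 0x0f, Data SGL Length Invalid, is an inference, not something the log states):

    nlb=1; block_size=512; sgl_len=1                   # values from the *ERROR* lines above
    if [ $((nlb * block_size)) -gt "$sgl_len" ]; then  # 1 * 512 = 512 bytes > 1 byte described by the SGL
        echo "Read NLB $nlb * block size $block_size > SGL length $sgl_len"
    fi

Each such read completes with sct=0, sc=15, which is exactly the completion the "Message suppressed 999 times" line above summarizes.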
00:11:18.082 [... further identical *ERROR* lines elided ...]
00:11:18.082 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:11:18.082 13:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
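The @49/@50 trace above is ns_hotplug_stress.sh resizing the NULL1 null bdev while reads are in flight. A minimal sketch of that step under stated assumptions (the NULL1 name, the value 1046, and the rpc.py path are taken from the trace; the increment-per-iteration loop, the prior value 1045, and the MiB unit for the size argument are assumptions):

    #!/usr/bin/env bash
    # Sketch: bump the target size and apply it to the null bdev over RPC.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path from the trace
    null_size=1045                # assumed value from the previous iteration
    null_size=$((null_size + 1))  # -> 1046, matching "null_size=1046" in the trace
    "$rpc" bdev_null_resize NULL1 "$null_size"  # resize NULL1; size assumed to be in MiB

Reads racing each resize are what generate the NLB-versus-SGL mismatches logged here, and the harness suppresses rather than fails on them, as the "Message suppressed 999 times" line shows.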
00:11:18.082 [... the identical *ERROR* line keeps repeating (timestamps through 13:57:15.902); duplicates elided ...] 00:11:18.086
[2024-07-15 13:57:15.902110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.902977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903922] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.903980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.086 [2024-07-15 13:57:15.904664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 
[2024-07-15 13:57:15.904721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.904993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.905804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906554] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.906999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 
[2024-07-15 13:57:15.907317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.087 [2024-07-15 13:57:15.907627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.907984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.908991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909101] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 
[2024-07-15 13:57:15.909874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.909997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.910028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.910057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.910085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.910123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.910155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.910190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.088 [2024-07-15 13:57:15.911597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.911991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912219] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.912881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 
[2024-07-15 13:57:15.913117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.089 [2024-07-15 13:57:15.913856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.092 Message suppressed 999 times: [2024-07-15 13:57:15.921855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.092 Read completed with error (sct=0, sc=15) 00:11:18.092 [2024-07-15 13:57:15.932131] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.095 [2024-07-15 13:57:15.932158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.095 [2024-07-15 13:57:15.932192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 
[2024-07-15 13:57:15.932891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.932994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.933807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934756] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.096 [2024-07-15 13:57:15.934842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.934868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.934895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.934918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.934947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.934974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 
[2024-07-15 13:57:15.935477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.935970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.936982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937487] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.097 [2024-07-15 13:57:15.937847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.937870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.937893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.937917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.937944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.937971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.937999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 
[2024-07-15 13:57:15.938171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.938972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939801] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.939972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 
[2024-07-15 13:57:15.940883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.940973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.941001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.941030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.941059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.941089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.941118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.941144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.098 [2024-07-15 13:57:15.941172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.941990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942342] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.942984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 [2024-07-15 13:57:15.943634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.099 
[2024-07-15 13:57:15.943666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:18.099 [... the same "Read NLB 1 * block size 512 > SGL length 1" *ERROR* entry from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated, timestamps advancing from 13:57:15.943697 through 13:57:15.956486 ...]
00:11:18.103 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:18.103 [... the same *ERROR* entry repeated again, timestamps 13:57:15.956513 through 13:57:15.961962 ...]
00:11:18.105 [2024-07-15 13:57:15.961990] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 
[2024-07-15 13:57:15.962847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.962978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.105 [2024-07-15 13:57:15.963848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.963878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.963905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.963936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.963962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.963989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964927] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.964989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 
[2024-07-15 13:57:15.965703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.965989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.966998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.967026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.106 [2024-07-15 13:57:15.967057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967408] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.967973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 
[2024-07-15 13:57:15.968175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.968717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.969988] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.107 [2024-07-15 13:57:15.970406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 
[2024-07-15 13:57:15.970702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.970902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.971974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972572] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.972990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [2024-07-15 13:57:15.973645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 
[2024-07-15 13:57:15.973672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.108 [... identical *ERROR* line repeated verbatim from 13:57:15.973672 through 13:57:15.992275 ...] 00:11:18.113 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.113 [2024-07-15 13:57:15.992304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.992969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993089] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.993743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 
[2024-07-15 13:57:15.994543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.994982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.114 [2024-07-15 13:57:15.995006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.995986] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 
[2024-07-15 13:57:15.996872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.996996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.997997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.115 [2024-07-15 13:57:15.998764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.998799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.998832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.998862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.998892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.998919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.998948] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.998976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 
[2024-07-15 13:57:15.999712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:15.999980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.000978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001561] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.001985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.002012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.002040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.002069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.002102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.002132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.116 [2024-07-15 13:57:16.002162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 
[2024-07-15 13:57:16.002338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.002982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003963] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [2024-07-15 13:57:16.003994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.117 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated for each read command in the test, timestamps 2024-07-15 13:57:16.004021 through 13:57:16.023551 ...] 00:11:18.123 [2024-07-15 13:57:16.023580] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.023987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 
[2024-07-15 13:57:16.024366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.024972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.123 [2024-07-15 13:57:16.025346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.025972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 Message suppressed 999 times: Read completed with error (sct=0, 
sc=15) 00:11:18.124 [2024-07-15 13:57:16.026066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 true 00:11:18.124 [2024-07-15 13:57:16.026853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.026997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027864] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.027972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:11:18.124 [2024-07-15 13:57:16.028628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.124 [2024-07-15 13:57:16.028883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.028914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.028942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.028971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.029972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030434] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.030993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 
[2024-07-15 13:57:16.031164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.031728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.125 [2024-07-15 13:57:16.032542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.032982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033017] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 
[2024-07-15 13:57:16.033753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.033932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.126 [2024-07-15 13:57:16.034835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:11:18.126 [2024-07-15 13:57:16.034862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:18.126 [... identical *ERROR* entry repeated verbatim several hundred times, app timestamps 13:57:16.034893 through 13:57:16.051708, Jenkins timestamps 00:11:18.126 through 00:11:18.131 ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.051736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447 00:11:18.131 [2024-07-15 13:57:16.052182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.131 [2024-07-15 13:57:16.052359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.131 [2024-07-15 13:57:16.052813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
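The two script-trace lines above show the stress test first verifying with kill -0 that the I/O generator (PID 1231447) is still alive, then hot-removing namespace 1 from nqn.2016-06.io.spdk:cnode1 via rpc.py while that I/O is still in flight. Reads racing the removal fail the transfer-length validation in nvmf_bdev_ctrlr_read_cmd() (ctrlr_bdev.c:309), which is what floods the log. Below is a minimal C sketch of that validation, under the assumption that the function compares the command's implied transfer size (NLB * block size) against the SGL length the host supplied; the names and signature are illustrative, not the actual SPDK source.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the ctrlr_bdev.c:309 check: a read may only
 * be submitted if the data it would transfer fits in the host's SGL. */
static bool read_len_fits_sgl(uint64_t num_blocks, uint64_t block_size,
                              uint32_t sgl_length)
{
    if (num_blocks * block_size > sgl_length) {
        /* Mirrors the repeated log line: NLB 1 * block size 512 > SGL length 1 */
        fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu64 " > SGL length %" PRIu32 "\n",
                num_blocks, block_size, sgl_length);
        return false; /* request is completed with an error instead of being submitted */
    }
    return true;
}

int main(void)
{
    /* The exact values from the flood above: a 1-block, 512-byte read
     * against a request whose SGL describes only 1 byte. */
    return read_len_fits_sgl(1, 512, 1) ? 0 : 1;
}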
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.135
[identical *ERROR* lines surrounding this notice trimmed]
[the *ERROR* line continues to repeat through 13:57:16.068770; duplicate lines trimmed]
[2024-07-15 13:57:16.068798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.068825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.068849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.068883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.068911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.068935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.068958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.068983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.069976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070835] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.070976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.071009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.071041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.137 [2024-07-15 13:57:16.071073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 
[2024-07-15 13:57:16.071576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.071892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.072983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073210] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.073848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.074142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.074175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 
[2024-07-15 13:57:16.074202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.074232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.074259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.074291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.074320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.138 [2024-07-15 13:57:16.074347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.074997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075700] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.075995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.076999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 
[2024-07-15 13:57:16.077057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.077991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.139 [2024-07-15 13:57:16.078017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078717] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.078977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 
[2024-07-15 13:57:16.079503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.079999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.140 [2024-07-15 13:57:16.080254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:11:18.140 [2024-07-15 13:57:16.080283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:18.145 [2024-07-15 13:57:16.098331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.098976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.099004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.099033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.099062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.099092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.145 [2024-07-15 13:57:16.099122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099240] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.099982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 
[2024-07-15 13:57:16.100035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.100978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101798] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.101998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.102028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.102056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.146 [2024-07-15 13:57:16.102084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 
[2024-07-15 13:57:16.102954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.102988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.103997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104477] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.104780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 
[2024-07-15 13:57:16.105922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.105977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.147 [2024-07-15 13:57:16.106199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.106996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107561] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.107967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 
[2024-07-15 13:57:16.108358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.108985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.148 [2024-07-15 13:57:16.109392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.109998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 [2024-07-15 13:57:16.110877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.149 
[2024-07-15 13:57:16.110905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:18.154 [... the same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeated several hundred times, timestamps 2024-07-15 13:57:16.110905 through 13:57:16.129807; duplicates elided ...]
[2024-07-15 13:57:16.129835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.129864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.129891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.129915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.129944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.129974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.130002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.130029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.154 [2024-07-15 13:57:16.130057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.155 [2024-07-15 13:57:16.130126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 
13:57:16.130533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.130989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:18.155 [2024-07-15 13:57:16.131227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.131975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.132989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133266] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.155 [2024-07-15 13:57:16.133550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.133996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 
[2024-07-15 13:57:16.134150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.134974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135676] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.135741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.136996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 
[2024-07-15 13:57:16.137150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.156 [2024-07-15 13:57:16.137435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.137969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138780] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.138988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 
[2024-07-15 13:57:16.139544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.139964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.140971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.141001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.141031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.141066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.141098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.141140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.157 [2024-07-15 13:57:16.141167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141515] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [2024-07-15 13:57:16.141545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.158 [... identical *ERROR* line repeated verbatim for every subsequent read command, timestamps 13:57:16.141572 through 13:57:16.160549; duplicate log entries elided ...] 00:11:18.428 [2024-07-15 13:57:16.160577] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.160824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.161157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.161192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.161220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.161247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.161275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.428 [2024-07-15 13:57:16.161303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 
[2024-07-15 13:57:16.161656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.161976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.162971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163486] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.163973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 
[2024-07-15 13:57:16.164251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.429 [2024-07-15 13:57:16.164669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.164997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.165974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.430 [2024-07-15 13:57:16.166059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.430 [2024-07-15 13:57:16.166344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
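The repeated *ERROR* line is the NVMe-oF target's transfer-length validation in nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309): a Read whose transfer size, NLB times the namespace block size, exceeds the SGL-described buffer is rejected and completed with the NVMe generic status Data SGL Length Invalid (sct=0, sc=0x0f), which is exactly the suppressed "sct=0, sc=15" completion above. Below is a minimal standalone C sketch of that check, with hypothetical names (check_read_len and its parameters are illustrative, not SPDK API), reproducing both log messages for the case this test output shows: NLB 1, 512-byte blocks, a 1-byte SGL.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC                0x0  /* generic command status type */
#define NVME_SC_DATA_SGL_LENGTH_INVALID 0x0f /* == 15 in the completion above */

/* Hypothetical helper mirroring the shape of the length check; sgl_len
 * stands in for the request's SGL-described payload length. */
static int check_read_len(uint64_t nlb, uint32_t block_size, uint32_t sgl_len,
                          uint8_t *sct, uint8_t *sc)
{
    if (nlb * block_size > sgl_len) {
        /* Same text as the repeated *ERROR* line in this log. */
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_len);
        *sct = NVME_SCT_GENERIC;
        *sc = NVME_SC_DATA_SGL_LENGTH_INVALID;
        return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t sct, sc;

    /* The case this test hammers: NLB 1, 512-byte blocks, 1-byte SGL. */
    if (check_read_len(1, 512, 1, &sct, &sc) != 0) {
        /* Matches the suppressed completion message in the log. */
        printf("Read completed with error (sct=%u, sc=%u)\n", sct, sc);
    }
    return 0;
}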
[... the same *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd continues to repeat, timestamps 2024-07-15 13:57:16.166059 through 13:57:16.175768; duplicates elided ...]
00:11:18.433 [2024-07-15 13:57:16.175798] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.175825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.175855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.175882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.175909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.175938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.175970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.176970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 
[2024-07-15 13:57:16.177019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.177987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178471] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.178970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.179000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.433 [2024-07-15 13:57:16.179029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 
[2024-07-15 13:57:16.179646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.179988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.180737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181519] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.181991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 
[2024-07-15 13:57:16.182326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.434 [2024-07-15 13:57:16.182563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.182972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.183004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.435 [2024-07-15 13:57:16.183151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
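Each rejected read above fails the size check quoted in the message itself: NLB 1 times the 512-byte block size is 512 bytes of data, while the SGL attached to the request describes only 1 byte, so nvmf_bdev_ctrlr_read_cmd completes the command with an error and the initiator reports "Read completed with error (sct=0, sc=11)". These failures are the intended effect of the stress test, which keeps reads in flight while namespace 1 is hot-removed and re-attached. Reconstructed from the @44-@50 xtrace markers below, the driving loop in test/nvmf/target/ns_hotplug_stress.sh is roughly the following sketch; the generator_pid name is illustrative (the I/O generator's PID in this run is 1231447) and the exact script text may differ:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1046                                # counter continues from earlier passes; 1047, 1048, ... below
    while kill -0 "$generator_pid"; do            # @44: loop while the I/O generator is still alive (hypothetical variable name)
        $rpc nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove namespace 1 under I/O
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))              # @49
        $rpc bdev_null_resize NULL1 "$null_size"  # @50: resize NULL1 under I/O; the RPC prints "true"
    done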
00:11:18.435 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:18.435 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:11:18.435 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
true
00:11:18.696 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
00:11:18.696 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:18.960 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:18.960 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:11:18.960 13:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
true
00:11:18.960 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
00:11:18.960 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:19.246 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:19.507 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:11:19.507 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
true
00:11:19.507 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
00:11:19.507 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:19.767 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:20.028 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:11:20.028 13:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
true
00:11:20.028 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
00:11:20.028 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:20.290 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:20.290 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:11:20.290 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
true
00:11:20.551 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
00:11:20.551 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:20.812 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:20.812 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:11:20.812 13:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
true
00:11:21.072 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
00:11:21.072 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:21.072 Initializing NVMe Controllers
00:11:21.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:21.072 Controller IO queue size 128, less than required.
00:11:21.072 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:21.072 Controller IO queue size 128, less than required.
00:11:21.072 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:21.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:21.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:21.072 Initialization complete. Launching workers.
00:11:21.072 ========================================================
00:11:21.072 Latency(us)
00:11:21.072 Device Information : IOPS MiB/s Average min max
00:11:21.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1215.90 0.59 26954.96 2351.65 1095540.99
00:11:21.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7734.94 3.78 16493.43 1683.62 494031.03
00:11:21.072 ========================================================
00:11:21.072 Total : 8950.84 4.37 17914.55 1683.62 1095540.99
00:11:21.072
00:11:21.333 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:21.333 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:11:21.333 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
true
00:11:21.593 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1231447
00:11:21.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1231447) - No such process
00:11:21.593 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1231447
00:11:21.593 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:21.593 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:21.852 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:21.852 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:21.852 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:21.852 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:21.852 13:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:11:22.113 null0
00:11:22.113 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:22.113 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:22.113 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:11:22.113 null1
00:11:22.113 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:22.113 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:22.113 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:11:22.373 null2
00:11:22.373 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:22.373 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:22.373 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:11:22.634 null3
00:11:22.634 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:22.634 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:22.634 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:11:22.634 null4
00:11:22.634 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:22.634 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:22.634 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:11:22.894 null5
00:11:22.894 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:22.894 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:22.894 13:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:11:22.894 null6
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:11:23.154 null7
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
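At this point the xtrace starts interleaving two things: the launcher loop at lines 58-66 of ns_hotplug_stress.sh and the eight background add_remove workers it spawns (their @14-@18 traces), which is why the counters above and below appear out of order. Reconstructed from those markers, the parallel phase looks roughly like the sketch below; the helper name and arguments follow the trace (add_remove 1 null0 through add_remove 8 null7), but the exact script text is inferred:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                            # @14: one worker per (nsid, bdev) pair
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                        # @16: repeated add/remove rounds per worker
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18
        done
    }

    nthreads=8                                                # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                      # @59
        $rpc bdev_null_create null$i 100 4096                 # @60: null bdev, 100 MB, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do                      # @62
        add_remove $((i + 1)) null$i &                        # @63: NSID 1..8 against null0..null7
        pids+=($!)                                            # @64
    done
    wait "${pids[@]}"                                         # @66: 1238532 1238533 ... in this run

Running eight workers concurrently against the same subsystem is what produces the duplicated-looking counter lines that follow; each backgrounded subshell traces its own copy of the @16 loop.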
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:23.154 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1238532 1238533 1238535 1238538 1238541 1238543 1238546 1238548 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.155 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:23.414 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:23.676 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:23.936 13:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:23.936 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:23.936 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:24.196 13:57:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.196 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.455 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.456 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:24.456 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.456 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.456 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:24.456 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.715 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.975 13:57:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.975 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:24.976 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:24.976 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.976 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:24.976 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:24.976 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.976 13:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:24.976 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:25.234 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.235 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.494 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.754 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:26.013 13:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.013 13:57:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.013 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:26.272 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:26.273 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.273 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:26.531 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.791 rmmod nvme_tcp 00:11:26.791 rmmod nvme_fabrics 00:11:26.791 rmmod nvme_keyring 00:11:26.791 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1231073 ']' 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1231073 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1231073 ']' 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1231073 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1231073 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1231073' 00:11:27.052 killing process with pid 1231073 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@967 -- # kill 1231073 00:11:27.052 13:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1231073 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.052 13:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.599 13:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.599 00:11:29.599 real 0m49.207s 00:11:29.599 user 3m13.721s 00:11:29.599 sys 0m16.304s 00:11:29.599 13:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.599 13:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.599 ************************************ 00:11:29.599 END TEST nvmf_ns_hotplug_stress 00:11:29.599 ************************************ 00:11:29.599 13:57:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.599 13:57:27 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:29.599 13:57:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.599 13:57:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.599 13:57:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.599 ************************************ 00:11:29.599 START TEST nvmf_connect_stress 00:11:29.599 ************************************ 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:29.599 * Looking for test storage... 
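For reference, the namespace churn traced above reduces to a small shell loop. The following is a minimal sketch reconstructing what target/ns_hotplug_stress.sh appears to execute, based only on the xtrace lines in this log; the rpc variable and the worker-launch loop are assumptions introduced here for readability, while add_remove, nsid, bdev, nthreads, and pids come straight from the trace.

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above -- not the script's verbatim source.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        # Hot-add and hot-remove the same namespace ten times (sh@16-18 in the trace).
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    # Eight background workers churn namespaces 1-8 backed by null0-null7 (sh@62-66),
    # which is why the add/remove entries above interleave across nsids.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"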
00:11:29.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.599 13:57:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.600 13:57:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:37.742 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:37.742 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:37.742 Found net devices under 0000:31:00.0: cvl_0_0 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.742 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.743 13:57:35 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:37.743 Found net devices under 0000:31:00.1: cvl_0_1 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:37.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:11:37.743 00:11:37.743 --- 10.0.0.2 ping statistics --- 00:11:37.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.743 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:11:37.743 00:11:37.743 --- 10.0.0.1 ping statistics --- 00:11:37.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.743 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1244303 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1244303 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1244303 ']' 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.743 13:57:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.743 [2024-07-15 13:57:35.510978] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:11:37.743 [2024-07-15 13:57:35.511036] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.743 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.743 [2024-07-15 13:57:35.606064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:37.743 [2024-07-15 13:57:35.681111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.743 [2024-07-15 13:57:35.681162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.743 [2024-07-15 13:57:35.681170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.743 [2024-07-15 13:57:35.681177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.743 [2024-07-15 13:57:35.681183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.743 [2024-07-15 13:57:35.681306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.743 [2024-07-15 13:57:35.681435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.743 [2024-07-15 13:57:35.681435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.313 [2024-07-15 13:57:36.342354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.313 [2024-07-15 13:57:36.373907] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.313 NULL1 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1244387 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.313 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.313 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.573 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.833 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.833 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:38.833 13:57:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.833 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.833 13:57:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.093 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.093 13:57:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:39.093 13:57:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.093 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.093 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.353 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.613 13:57:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1244387 00:11:39.613 13:57:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.613 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.613 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.872 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.872 13:57:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:39.872 13:57:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.872 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.872 13:57:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.132 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.132 13:57:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:40.132 13:57:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.132 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.132 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.393 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.393 13:57:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:40.393 13:57:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.393 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.393 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.653 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.912 13:57:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:40.912 13:57:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.912 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.912 13:57:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.182 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.182 13:57:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:41.182 13:57:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.182 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.182 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.442 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.442 13:57:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:41.442 13:57:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.442 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.442 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.701 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.701 13:57:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:41.701 13:57:39 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.701 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.701 13:57:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.962 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.962 13:57:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:41.962 13:57:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.962 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.962 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.533 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.533 13:57:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:42.533 13:57:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.533 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.533 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.794 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.794 13:57:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:42.794 13:57:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.794 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.794 13:57:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.054 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.054 13:57:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:43.054 13:57:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.054 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.054 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.315 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.315 13:57:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:43.315 13:57:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.315 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.315 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.886 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.886 13:57:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:43.886 13:57:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.886 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.886 13:57:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.147 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.147 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:44.147 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:44.147 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.147 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.408 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:44.408 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.408 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.408 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.668 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.668 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:44.668 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.668 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.668 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.929 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.929 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:44.929 13:57:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.929 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.929 13:57:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.501 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.501 13:57:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:45.501 13:57:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.501 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.501 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.762 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.762 13:57:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:45.762 13:57:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.762 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.762 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.022 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.022 13:57:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:46.022 13:57:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.022 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.022 13:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.283 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.283 13:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:46.283 13:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.283 13:57:44 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.283 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.544 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.544 13:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:46.544 13:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.544 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.544 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.115 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.115 13:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:47.115 13:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.115 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.115 13:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.376 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.376 13:57:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:47.376 13:57:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.376 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.376 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.637 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.637 13:57:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:47.637 13:57:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.637 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.637 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.897 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.897 13:57:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:47.897 13:57:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.897 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.897 13:57:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.158 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.158 13:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:48.158 13:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.158 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.158 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.419 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1244387 00:11:48.679 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1244387) - No such process 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1244387 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.679 rmmod nvme_tcp 00:11:48.679 rmmod nvme_fabrics 00:11:48.679 rmmod nvme_keyring 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1244303 ']' 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1244303 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1244303 ']' 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1244303 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1244303 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1244303' 00:11:48.679 killing process with pid 1244303 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1244303 00:11:48.679 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1244303 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.940 13:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.855 13:57:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.855 00:11:50.855 real 0m21.667s 00:11:50.855 user 0m42.464s 00:11:50.855 sys 0m9.135s 00:11:50.855 13:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.855 13:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.855 ************************************ 00:11:50.855 END TEST nvmf_connect_stress 00:11:50.855 ************************************ 00:11:50.855 13:57:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:50.855 13:57:48 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:50.855 13:57:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.855 13:57:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.855 13:57:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:51.116 ************************************ 00:11:51.116 START TEST nvmf_fused_ordering 00:11:51.116 ************************************ 00:11:51.116 13:57:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:51.116 * Looking for test storage... 00:11:51.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.116 13:57:49 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.116 13:57:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:59.309 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:59.309 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:59.309 Found net devices under 0000:31:00.0: cvl_0_0 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:59.309 Found net devices under 0000:31:00.1: cvl_0_1 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.309 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.310 13:57:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:11:59.310 00:11:59.310 --- 10.0.0.2 ping statistics --- 00:11:59.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.310 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:11:59.310 00:11:59.310 --- 10.0.0.1 ping statistics --- 00:11:59.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.310 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1251093 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1251093 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1251093 ']' 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.310 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:59.310 [2024-07-15 13:57:57.186290] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:59.310 [2024-07-15 13:57:57.186342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.310 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.310 [2024-07-15 13:57:57.278821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.310 [2024-07-15 13:57:57.370256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.310 [2024-07-15 13:57:57.370315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.310 [2024-07-15 13:57:57.370323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.310 [2024-07-15 13:57:57.370331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.310 [2024-07-15 13:57:57.370337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
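For reference, the namespace plumbing and target bring-up traced above (nvmf_tcp_init followed by nvmfappstart in nvmf/common.sh) reduce to the minimal sketch below. Device names, addresses, and RPC values are copied from this log; the polling loop is an illustrative stand-in for waitforlisten rather than the helper itself, and rpc.py is assumed to sit at its usual in-tree path.

#!/usr/bin/env bash
set -e
# Two-port loopback topology: the target-side port moves into its own
# network namespace so initiator and target traffic crosses the wire.
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # NVMe/TCP port

# Start the target in the namespace and wait for its RPC socket.
SOCK=/var/tmp/spdk.sock
RPC=./scripts/rpc.py
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

# Provision the subsystem exercised by the fused_ordering test below,
# using the same values the rpc_cmd calls pass in the trace.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512        # 1000 MiB backing bdev, 512 B blocks
"$RPC" bdev_wait_for_examine
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

With the listener up on 10.0.0.2:4420, the fused_ordering binary invoked next in the trace can connect over the namespace boundary and drive its 1024 fused command pairs.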
00:11:59.310 [2024-07-15 13:57:57.370371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.883 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.883 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:59.883 13:57:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.883 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:59.883 13:57:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.144 [2024-07-15 13:57:58.011212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.144 [2024-07-15 13:57:58.027438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.144 NULL1 00:12:00.144 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.145 13:57:58 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.145 13:57:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:00.145 [2024-07-15 13:57:58.085204] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:00.145 [2024-07-15 13:57:58.085278] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251440 ] 00:12:00.145 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.405 Attached to nqn.2016-06.io.spdk:cnode1 00:12:00.405 Namespace ID: 1 size: 1GB 00:12:00.405 fused_ordering(0) 00:12:00.405 fused_ordering(1) 00:12:00.405 fused_ordering(2) 00:12:00.405 fused_ordering(3) 00:12:00.405 fused_ordering(4) 00:12:00.405 fused_ordering(5) 00:12:00.405 fused_ordering(6) 00:12:00.405 fused_ordering(7) 00:12:00.405 fused_ordering(8) 00:12:00.405 fused_ordering(9) 00:12:00.405 fused_ordering(10) 00:12:00.405 fused_ordering(11) 00:12:00.405 fused_ordering(12) 00:12:00.405 fused_ordering(13) 00:12:00.405 fused_ordering(14) 00:12:00.405 fused_ordering(15) 00:12:00.406 fused_ordering(16) 00:12:00.406 fused_ordering(17) 00:12:00.406 fused_ordering(18) 00:12:00.406 fused_ordering(19) 00:12:00.406 fused_ordering(20) 00:12:00.406 fused_ordering(21) 00:12:00.406 fused_ordering(22) 00:12:00.406 fused_ordering(23) 00:12:00.406 fused_ordering(24) 00:12:00.406 fused_ordering(25) 00:12:00.406 fused_ordering(26) 00:12:00.406 fused_ordering(27) 00:12:00.406 fused_ordering(28) 00:12:00.406 fused_ordering(29) 00:12:00.406 fused_ordering(30) 00:12:00.406 fused_ordering(31) 00:12:00.406 fused_ordering(32) 00:12:00.406 fused_ordering(33) 00:12:00.406 fused_ordering(34) 00:12:00.406 fused_ordering(35) 00:12:00.406 fused_ordering(36) 00:12:00.406 fused_ordering(37) 00:12:00.406 fused_ordering(38) 00:12:00.406 fused_ordering(39) 00:12:00.406 fused_ordering(40) 00:12:00.406 fused_ordering(41) 00:12:00.406 fused_ordering(42) 00:12:00.406 fused_ordering(43) 00:12:00.406 fused_ordering(44) 00:12:00.406 fused_ordering(45) 00:12:00.406 fused_ordering(46) 00:12:00.406 fused_ordering(47) 00:12:00.406 fused_ordering(48) 00:12:00.406 fused_ordering(49) 00:12:00.406 fused_ordering(50) 00:12:00.406 fused_ordering(51) 00:12:00.406 fused_ordering(52) 00:12:00.406 fused_ordering(53) 00:12:00.406 fused_ordering(54) 00:12:00.406 fused_ordering(55) 00:12:00.406 fused_ordering(56) 00:12:00.406 fused_ordering(57) 00:12:00.406 fused_ordering(58) 00:12:00.406 fused_ordering(59) 00:12:00.406 fused_ordering(60) 00:12:00.406 fused_ordering(61) 00:12:00.406 fused_ordering(62) 00:12:00.406 fused_ordering(63) 00:12:00.406 fused_ordering(64) 00:12:00.406 fused_ordering(65) 00:12:00.406 fused_ordering(66) 00:12:00.406 fused_ordering(67) 00:12:00.406 fused_ordering(68) 00:12:00.406 fused_ordering(69) 00:12:00.406 fused_ordering(70) 00:12:00.406 fused_ordering(71) 00:12:00.406 fused_ordering(72) 00:12:00.406 fused_ordering(73) 00:12:00.406 fused_ordering(74) 00:12:00.406 fused_ordering(75) 00:12:00.406 fused_ordering(76) 00:12:00.406 fused_ordering(77) 00:12:00.406 fused_ordering(78) 00:12:00.406 
fused_ordering(79) 00:12:00.406 [fused_ordering(80) through fused_ordering(938) condensed: the counter advanced strictly in order with no gaps or reordering, in completion batches stamped from 00:12:00.406 through 00:12:02.385] 00:12:02.385 fused_ordering(939)
00:12:02.385 fused_ordering(940) 00:12:02.385 fused_ordering(941) 00:12:02.385 fused_ordering(942) 00:12:02.385 fused_ordering(943) 00:12:02.385 fused_ordering(944) 00:12:02.385 fused_ordering(945) 00:12:02.385 fused_ordering(946) 00:12:02.385 fused_ordering(947) 00:12:02.385 fused_ordering(948) 00:12:02.385 fused_ordering(949) 00:12:02.385 fused_ordering(950) 00:12:02.385 fused_ordering(951) 00:12:02.385 fused_ordering(952) 00:12:02.385 fused_ordering(953) 00:12:02.385 fused_ordering(954) 00:12:02.385 fused_ordering(955) 00:12:02.385 fused_ordering(956) 00:12:02.385 fused_ordering(957) 00:12:02.385 fused_ordering(958) 00:12:02.385 fused_ordering(959) 00:12:02.385 fused_ordering(960) 00:12:02.385 fused_ordering(961) 00:12:02.385 fused_ordering(962) 00:12:02.385 fused_ordering(963) 00:12:02.385 fused_ordering(964) 00:12:02.385 fused_ordering(965) 00:12:02.385 fused_ordering(966) 00:12:02.385 fused_ordering(967) 00:12:02.385 fused_ordering(968) 00:12:02.385 fused_ordering(969) 00:12:02.385 fused_ordering(970) 00:12:02.385 fused_ordering(971) 00:12:02.385 fused_ordering(972) 00:12:02.385 fused_ordering(973) 00:12:02.385 fused_ordering(974) 00:12:02.385 fused_ordering(975) 00:12:02.385 fused_ordering(976) 00:12:02.385 fused_ordering(977) 00:12:02.385 fused_ordering(978) 00:12:02.385 fused_ordering(979) 00:12:02.385 fused_ordering(980) 00:12:02.385 fused_ordering(981) 00:12:02.385 fused_ordering(982) 00:12:02.385 fused_ordering(983) 00:12:02.385 fused_ordering(984) 00:12:02.385 fused_ordering(985) 00:12:02.385 fused_ordering(986) 00:12:02.385 fused_ordering(987) 00:12:02.385 fused_ordering(988) 00:12:02.385 fused_ordering(989) 00:12:02.385 fused_ordering(990) 00:12:02.385 fused_ordering(991) 00:12:02.385 fused_ordering(992) 00:12:02.385 fused_ordering(993) 00:12:02.385 fused_ordering(994) 00:12:02.385 fused_ordering(995) 00:12:02.385 fused_ordering(996) 00:12:02.385 fused_ordering(997) 00:12:02.385 fused_ordering(998) 00:12:02.385 fused_ordering(999) 00:12:02.385 fused_ordering(1000) 00:12:02.385 fused_ordering(1001) 00:12:02.385 fused_ordering(1002) 00:12:02.385 fused_ordering(1003) 00:12:02.385 fused_ordering(1004) 00:12:02.385 fused_ordering(1005) 00:12:02.385 fused_ordering(1006) 00:12:02.385 fused_ordering(1007) 00:12:02.385 fused_ordering(1008) 00:12:02.385 fused_ordering(1009) 00:12:02.385 fused_ordering(1010) 00:12:02.385 fused_ordering(1011) 00:12:02.385 fused_ordering(1012) 00:12:02.385 fused_ordering(1013) 00:12:02.385 fused_ordering(1014) 00:12:02.385 fused_ordering(1015) 00:12:02.385 fused_ordering(1016) 00:12:02.385 fused_ordering(1017) 00:12:02.385 fused_ordering(1018) 00:12:02.385 fused_ordering(1019) 00:12:02.385 fused_ordering(1020) 00:12:02.385 fused_ordering(1021) 00:12:02.385 fused_ordering(1022) 00:12:02.385 fused_ordering(1023) 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:12:02.385 rmmod nvme_tcp 00:12:02.385 rmmod nvme_fabrics 00:12:02.385 rmmod nvme_keyring 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1251093 ']' 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1251093 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1251093 ']' 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1251093 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1251093 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1251093' 00:12:02.385 killing process with pid 1251093 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1251093 00:12:02.385 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1251093 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.645 13:58:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.555 13:58:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.555 00:12:04.555 real 0m13.627s 00:12:04.555 user 0m6.883s 00:12:04.555 sys 0m7.332s 00:12:04.555 13:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.555 13:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.555 ************************************ 00:12:04.555 END TEST nvmf_fused_ordering 00:12:04.555 ************************************ 00:12:04.555 13:58:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:04.555 13:58:02 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:04.556 13:58:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:04.556 13:58:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
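The shutdown path just traced (nvmftestfini, nvmfcleanup, killprocess, remove_spdk_ns) is the mirror image of the bring-up. A condensed sketch, assuming the nvmfpid and namespace names used earlier in this log; the real helpers additionally retry module removal up to 20 times and clean up shared memory:

# Unload host-side NVMe/TCP modules; dependents go first.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target and reap it.
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null

# Dismantle the namespace topology and flush leftover addresses.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1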
00:12:04.556 13:58:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:04.817 ************************************ 00:12:04.817 START TEST nvmf_delete_subsystem 00:12:04.817 ************************************ 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:04.817 * Looking for test storage... 00:12:04.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.817 13:58:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.817 13:58:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:12.954 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:12.954 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.954 
13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:12.954 Found net devices under 0000:31:00.0: cvl_0_0 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:12.954 Found net devices under 0000:31:00.1: cvl_0_1 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.954 13:58:10 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:12.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:12:12.954 00:12:12.954 --- 10.0.0.2 ping statistics --- 00:12:12.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.954 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:12:12.954 00:12:12.954 --- 10.0.0.1 ping statistics --- 00:12:12.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.954 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1256459 00:12:12.954 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1256459 00:12:12.955 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1256459 ']' 00:12:12.955 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.955 13:58:10 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.955 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.955 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.955 13:58:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.955 [2024-07-15 13:58:10.903976] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:12.955 [2024-07-15 13:58:10.904014] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.955 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.955 [2024-07-15 13:58:10.966182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:12.955 [2024-07-15 13:58:11.030970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.955 [2024-07-15 13:58:11.031005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.955 [2024-07-15 13:58:11.031013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.955 [2024-07-15 13:58:11.031019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.955 [2024-07-15 13:58:11.031024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
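The nvmf_tcp_init sequence traced above splits the two E810 ports into a point-to-point rig: one port is moved into a private network namespace and addressed as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and a firewall rule opens the NVMe/TCP port. A minimal standalone sketch of that plumbing, using the interface names and addresses from this log (the real logic lives in test/nvmf/common.sh; error handling and the multi-NIC selection are omitted):

  #!/usr/bin/env bash
  # Point-to-point NVMe/TCP test rig, as wired up by nvmf_tcp_init above.
  NS=cvl_0_0_ns_spdk        # namespace that will host the target port
  TGT_IF=cvl_0_0            # target side, 10.0.0.2
  INI_IF=cvl_0_1            # initiator side, 10.0.0.1

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"              # isolate the target port
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let the initiator reach the target's NVMe/TCP listener on 4420.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ..., via nvmfappstart above), which is why every target-side command in the rest of this log carries the NVMF_TARGET_NS_CMD prefix.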
00:12:12.955 [2024-07-15 13:58:11.031157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.955 [2024-07-15 13:58:11.031158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.896 [2024-07-15 13:58:11.745968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.896 [2024-07-15 13:58:11.770163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.896 NULL1 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.896 Delay0 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.896 13:58:11 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1256802 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:13.896 13:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:13.896 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.896 [2024-07-15 13:58:11.866771] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:15.808 13:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.808 13:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.808 13:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 starting I/O failed: -6 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 starting I/O failed: -6 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 starting I/O failed: -6 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Write completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 starting I/O failed: -6 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 starting I/O failed: -6 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 starting I/O failed: -6 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 starting I/O failed: -6 00:12:16.070 Read completed with error (sct=0, sc=8) 00:12:16.070 Write completed with error (sct=0, sc=8) 00:12:16.070 Write completed with error (sct=0, sc=8) 
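The completion records above and below are the point of this test: nvmf_delete_subsystem is issued while spdk_nvme_perf still has up to 128 commands per queue in flight, so every outstanding request is completed with an abort status (sct=0 is the generic status type; sc=8 decodes to command aborted due to SQ deletion) and new submissions fail with -6, i.e. -ENXIO. A hedged sketch of the sequence the script drove to get here, with rpc.py standing in for the rpc_cmd wrapper and paths shortened (arguments are as logged):

  #!/usr/bin/env bash
  # Condensed replay of test/nvmf/target/delete_subsystem.sh as seen above.
  RPC=scripts/rpc.py    # rpc_cmd in the trace resolves to this helper

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Delay0 wraps a null bdev with 1,000,000 us of injected latency, so I/O
  # is guaranteed to still be queued when the subsystem disappears.
  $RPC bdev_null_create NULL1 1000 512
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  sleep 2                                                # let I/O pile up
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # yank it mid-flight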
00:12:16.070 (dozens of 'Read/Write completed with error (sct=0, sc=8)' completion records, interleaved with 'starting I/O failed: -6', condensed)
00:12:16.070 [2024-07-15 13:58:13.989785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x578650 is same with the state(5) to be set
00:12:16.070 (further completion-error records condensed)
00:12:16.070 [2024-07-15 13:58:13.994626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe27c000c00 is same with the state(5) to be set
00:12:16.071 (further completion-error records condensed)
00:12:17.013 [2024-07-15 13:58:14.964725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x554500 is same with the state(5) to be set
00:12:17.013 (further completion-error records condensed)
00:12:17.013 [2024-07-15 13:58:14.993133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575cb0 is same with the state(5) to be set
00:12:17.013 (further completion-error records condensed)
00:12:17.013 [2024-07-15 13:58:14.993525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x574d00 is same with the state(5) to be set
00:12:17.013 (further completion-error records condensed)
00:12:17.013 [2024-07-15 13:58:14.996505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe27c00d760 is same with the state(5) to be set
00:12:17.013 (remaining completion-error records condensed) 00:12:17.013 Read completed with error
(sct=0, sc=8) 00:12:17.013 Write completed with error (sct=0, sc=8) 00:12:17.013 Read completed with error (sct=0, sc=8) 00:12:17.013 Read completed with error (sct=0, sc=8) 00:12:17.013 [2024-07-15 13:58:14.996609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe27c00cfe0 is same with the state(5) to be set 00:12:17.013 Initializing NVMe Controllers 00:12:17.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:17.013 Controller IO queue size 128, less than required. 00:12:17.013 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:17.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:17.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:17.013 Initialization complete. Launching workers. 00:12:17.013 ======================================================== 00:12:17.013 Latency(us) 00:12:17.013 Device Information : IOPS MiB/s Average min max 00:12:17.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.91 0.08 896594.75 239.88 1005656.28 00:12:17.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.95 0.08 924433.05 281.93 1044548.63 00:12:17.013 ======================================================== 00:12:17.013 Total : 326.86 0.16 910047.10 239.88 1044548.63 00:12:17.013 00:12:17.013 [2024-07-15 13:58:14.997249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x554500 (9): Bad file descriptor 00:12:17.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:17.013 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.013 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:17.013 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1256802 00:12:17.014 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1256802 00:12:17.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1256802) - No such process 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1256802 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1256802 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1256802 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.585 [2024-07-15 13:58:15.529947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1257498 00:12:17.585 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:17.586 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:17.586 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:17.586 13:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:17.586 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.586 [2024-07-15 13:58:15.597887] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
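The second pass re-creates the subsystem, runs perf for only 3 seconds, and this time lets the I/O drain: the kill -0 / sleep 0.5 iterations that follow poll the backgrounded perf process (pid 1257498 in this run) until it exits on its own, with a bounded budget. A sketch of the loop's semantics at delete_subsystem.sh lines 57-60, not a verbatim copy of the script; kill -0 delivers no signal, it only probes whether the PID still exists:

  #!/usr/bin/env bash
  # Bounded wait for the backgrounded spdk_nvme_perf run to finish.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # pid still alive?
      if (( delay++ > 20 )); then             # ~10 s budget at 0.5 s per poll
          echo "perf did not exit in time" >&2
          exit 1
      fi
      sleep 0.5
  done
  wait "$perf_pid"    # reap it and pick up its exit status

Once the probe reports that the process is gone the loop falls through, which is exactly the 'line 57: kill: (1257498) - No such process' message further down, after which the trap-driven nvmftestfini teardown (rmmod nvme-tcp, killprocess on the target, namespace removal) runs.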
00:12:18.157 13:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:18.157 13:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:18.157 13:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:18.728 13:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:18.728 13:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:18.728 13:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:18.989 13:58:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:18.989 13:58:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:18.989 13:58:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:19.558 13:58:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:19.558 13:58:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:19.558 13:58:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:20.129 13:58:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:20.129 13:58:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:20.129 13:58:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:20.699 13:58:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:20.699 13:58:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:20.699 13:58:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:20.959 Initializing NVMe Controllers 00:12:20.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:20.959 Controller IO queue size 128, less than required. 00:12:20.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:20.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:20.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:20.959 Initialization complete. Launching workers. 
00:12:20.959 ======================================================== 00:12:20.959 Latency(us) 00:12:20.959 Device Information : IOPS MiB/s Average min max 00:12:20.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002187.15 1000180.57 1040918.52 00:12:20.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002838.39 1000147.20 1009528.08 00:12:20.959 ======================================================== 00:12:20.959 Total : 256.00 0.12 1002512.77 1000147.20 1040918.52 00:12:20.959 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257498 00:12:21.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1257498) - No such process 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1257498 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.219 rmmod nvme_tcp 00:12:21.219 rmmod nvme_fabrics 00:12:21.219 rmmod nvme_keyring 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:21.219 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1256459 ']' 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1256459 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1256459 ']' 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1256459 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1256459 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1256459' 00:12:21.220 killing process with pid 1256459 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1256459 00:12:21.220 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1256459 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.480 13:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.389 13:58:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.389 00:12:23.389 real 0m18.731s 00:12:23.389 user 0m31.071s 00:12:23.389 sys 0m6.825s 00:12:23.389 13:58:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.389 13:58:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.389 ************************************ 00:12:23.389 END TEST nvmf_delete_subsystem 00:12:23.389 ************************************ 00:12:23.389 13:58:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:23.389 13:58:21 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:23.389 13:58:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:23.389 13:58:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.389 13:58:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.389 ************************************ 00:12:23.389 START TEST nvmf_ns_masking 00:12:23.389 ************************************ 00:12:23.389 13:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:23.656 * Looking for test storage... 
00:12:23.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.656 13:58:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0d42cd64-5b99-432e-b7d0-5872a0ef42b7 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=54e062fb-9e73-4c00-8a34-773f00652e34 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9d10f94d-c020-41df-9d23-10cdd7ac5c1e 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.657 13:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:31.904 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:31.904 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.904 
13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:31.904 Found net devices under 0000:31:00.0: cvl_0_0 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:31.904 Found net devices under 0000:31:00.1: cvl_0_1 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:12:31.904 00:12:31.904 --- 10.0.0.2 ping statistics --- 00:12:31.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.904 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:12:31.904 00:12:31.904 --- 10.0.0.1 ping statistics --- 00:12:31.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.904 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.904 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1262861 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1262861 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1262861 ']' 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.905 13:58:29 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.905 13:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.905 [2024-07-15 13:58:29.733561] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:31.905 [2024-07-15 13:58:29.733608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.905 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.905 [2024-07-15 13:58:29.806160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.905 [2024-07-15 13:58:29.870279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.905 [2024-07-15 13:58:29.870313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.905 [2024-07-15 13:58:29.870322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.905 [2024-07-15 13:58:29.870328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.905 [2024-07-15 13:58:29.870334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.905 [2024-07-15 13:58:29.870359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.475 13:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.475 13:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:32.475 13:58:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.475 13:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.475 13:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.475 13:58:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.475 13:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:32.736 [2024-07-15 13:58:30.681010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.736 13:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:32.736 13:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:32.736 13:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:32.996 Malloc1 00:12:32.996 13:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:32.996 Malloc2 00:12:32.996 13:58:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
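The trace above assembles the masking target: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), two 64 MiB Malloc bdevs with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 created with -a (any host may connect) so that visibility is controlled per namespace rather than per subsystem. A minimal sketch of the equivalent RPC sequence, assuming rpc.py talks to the default /var/tmp/spdk.sock of the nvmf_tgt started earlier (the ip netns wrapper from the trace is omitted for brevity):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with the same -o/-u flags used in the trace
$rpc nvmf_create_transport -t tcp -o -u 8192
# two 64 MiB, 512-byte-block malloc bdevs back namespaces 1 and 2
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
# -a: allow any host; masking is enforced per namespace below
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME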
00:12:33.257 13:58:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:33.257 13:58:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.518 [2024-07-15 13:58:31.468661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.518 13:58:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:33.518 13:58:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d10f94d-c020-41df-9d23-10cdd7ac5c1e -a 10.0.0.2 -s 4420 -i 4 00:12:33.779 13:58:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.779 13:58:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.779 13:58:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.779 13:58:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:33.779 13:58:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.691 [ 0]:0x1 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e985a664efa4c96a80163d6f481b3c4 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e985a664efa4c96a80163d6f481b3c4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.691 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
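Each ns_is_visible assertion in this trace is a two-step probe over nvme-cli: the namespace must show up in nvme list-ns, and the NGUID reported by nvme id-ns must be non-zero, because a masked namespace identifies with an all-zero NGUID. A rough reconstruction of the helper from target/ns_masking.sh, assuming /dev/nvme0 is the controller created by the nvme connect above (the in-tree version may differ in detail):

ns_is_visible() {
    # the namespace should appear in the controller's active namespace list
    nvme list-ns /dev/nvme0 | grep "$1"
    # a masked namespace reports an all-zero NGUID; a visible one does not
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

Usage mirrors the trace: ns_is_visible 0x1 succeeds while namespace 1 is exposed, and the NOT wrapper later asserts the same call fails once the namespace is masked.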
00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.951 [ 0]:0x1 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e985a664efa4c96a80163d6f481b3c4 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e985a664efa4c96a80163d6f481b3c4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.951 [ 1]:0x2 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.951 13:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.951 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85ca67c4854740ba868578fc7a00f61b 00:12:35.951 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85ca67c4854740ba868578fc7a00f61b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.951 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:35.951 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.212 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.473 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d10f94d-c020-41df-9d23-10cdd7ac5c1e -a 10.0.0.2 -s 4420 -i 4 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:36.734 13:58:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:38.646 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:38.646 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:38.646 13:58:36 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.646 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:38.646 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.647 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:38.647 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:38.647 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:38.906 [ 0]:0x2 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85ca67c4854740ba868578fc7a00f61b 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
85ca67c4854740ba868578fc7a00f61b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.906 13:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.166 [ 0]:0x1 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e985a664efa4c96a80163d6f481b3c4 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e985a664efa4c96a80163d6f481b3c4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.166 [ 1]:0x2 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85ca67c4854740ba868578fc7a00f61b 00:12:39.166 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85ca67c4854740ba868578fc7a00f61b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.167 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.434 [ 0]:0x2 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85ca67c4854740ba868578fc7a00f61b 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85ca67c4854740ba868578fc7a00f61b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.434 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d10f94d-c020-41df-9d23-10cdd7ac5c1e -a 10.0.0.2 -s 4420 -i 4 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:39.694 13:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
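Because namespace 1 was re-added with --no-auto-visible, its visibility is toggled per host NQN at runtime: nvmf_ns_add_host grants a host access to a namespace ID and nvmf_ns_remove_host revokes it, which is exactly what the alternating ns_is_visible / NOT ns_is_visible checks above verify (namespace 2, added without the flag, stays visible throughout). A minimal sketch of the masking RPCs, reusing the subsystem and host NQNs from this trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# attach the namespace without auto-visibility, then expose it to host1 only
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# revoking access masks namespace 1 again for host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The failed nvmf_ns_remove_host on namespace 2 a little further down (JSON-RPC error -32602, Invalid parameters) is expected by the test: presumably namespace 2, having been added auto-visible, has no per-host allow list to edit.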
00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:42.238 [ 0]:0x1 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e985a664efa4c96a80163d6f481b3c4 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e985a664efa4c96a80163d6f481b3c4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:42.238 [ 1]:0x2 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85ca67c4854740ba868578fc7a00f61b 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85ca67c4854740ba868578fc7a00f61b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.238 13:58:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.238 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.239 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.262 [ 0]:0x2 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85ca67c4854740ba868578fc7a00f61b 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85ca67c4854740ba868578fc7a00f61b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:42.262 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:42.523 [2024-07-15 13:58:40.373863] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:42.523 request: 00:12:42.523 { 00:12:42.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.523 "nsid": 2, 00:12:42.523 "host": "nqn.2016-06.io.spdk:host1", 00:12:42.523 "method": "nvmf_ns_remove_host", 00:12:42.523 "req_id": 1 00:12:42.523 } 00:12:42.523 Got JSON-RPC error response 00:12:42.523 response: 00:12:42.523 { 00:12:42.523 "code": -32602, 00:12:42.523 "message": "Invalid parameters" 00:12:42.523 } 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:42.523 [ 0]:0x2 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85ca67c4854740ba868578fc7a00f61b 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
85ca67c4854740ba868578fc7a00f61b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:42.523 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1265166 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1265166 /var/tmp/host.sock 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1265166 ']' 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:42.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.784 13:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.784 [2024-07-15 13:58:40.738527] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
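From here the test runs a second SPDK application as the initiator: spdk_tgt is started on /var/tmp/host.sock with core mask 0x2, both namespaces are re-created with explicit NGUIDs derived from the ns1uuid/ns2uuid values generated at the top of the test, and bdev_nvme_attach_controller is issued once per host NQN so that bdev_get_bdevs can confirm each host sees exactly the namespace it was granted (the bdev UUIDs are matched back against the original UUIDs with jq). A sketch of the UUID-to-NGUID conversion visible on the rpc.py command lines, assuming uuid2nguid from nvmf/common.sh simply upper-cases and strips the dashes (the trace only shows the tr -d - half, so this reconstruction is hypothetical):

# hypothetical reconstruction: 0d42cd64-5b99-432e-b7d0-5872a0ef42b7 -> 0D42CD645B99432EB7D05872A0EF42B7
uuid2nguid() {
    tr -d '-' <<< "${1^^}"
}
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$(uuid2nguid "$ns1uuid")"
# verify through the host app that the attached bdev carries the expected UUID
$rpc -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'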
00:12:42.784 [2024-07-15 13:58:40.738577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265166 ] 00:12:42.784 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.784 [2024-07-15 13:58:40.819894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.784 [2024-07-15 13:58:40.883929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.725 13:58:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.725 13:58:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:43.725 13:58:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.725 13:58:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:43.725 13:58:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0d42cd64-5b99-432e-b7d0-5872a0ef42b7 00:12:43.725 13:58:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:43.725 13:58:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0D42CD645B99432EB7D05872A0EF42B7 -i 00:12:43.985 13:58:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 54e062fb-9e73-4c00-8a34-773f00652e34 00:12:43.985 13:58:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:43.985 13:58:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 54E062FB9E734C008A34773F00652E34 -i 00:12:44.245 13:58:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.245 13:58:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:44.505 13:58:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:44.505 13:58:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:44.765 nvme0n1 00:12:44.765 13:58:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:44.766 13:58:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:45.025 nvme1n2 00:12:45.025 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:45.025 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:45.025 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:45.025 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:45.025 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0d42cd64-5b99-432e-b7d0-5872a0ef42b7 == \0\d\4\2\c\d\6\4\-\5\b\9\9\-\4\3\2\e\-\b\7\d\0\-\5\8\7\2\a\0\e\f\4\2\b\7 ]] 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:45.286 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 54e062fb-9e73-4c00-8a34-773f00652e34 == \5\4\e\0\6\2\f\b\-\9\e\7\3\-\4\c\0\0\-\8\a\3\4\-\7\7\3\f\0\0\6\5\2\e\3\4 ]] 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1265166 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1265166 ']' 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1265166 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1265166 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:45.547 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1265166' 00:12:45.548 killing process with pid 1265166 00:12:45.548 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1265166 00:12:45.548 13:58:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1265166 00:12:45.808 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.069 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:46.069 13:58:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:46.069 13:58:43 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.069 13:58:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.069 rmmod nvme_tcp 00:12:46.069 rmmod nvme_fabrics 00:12:46.069 rmmod nvme_keyring 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1262861 ']' 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1262861 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1262861 ']' 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1262861 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1262861 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1262861' 00:12:46.069 killing process with pid 1262861 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1262861 00:12:46.069 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1262861 00:12:46.329 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.329 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.329 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.329 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.329 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.329 13:58:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.329 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.330 13:58:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.878 13:58:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:48.878 00:12:48.878 real 0m24.881s 00:12:48.878 user 0m24.037s 00:12:48.878 sys 0m7.932s 00:12:48.878 13:58:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.878 13:58:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:48.878 ************************************ 00:12:48.878 END TEST nvmf_ns_masking 00:12:48.878 ************************************ 00:12:48.878 13:58:46 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:48.878 13:58:46 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:48.878 13:58:46 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:48.878 13:58:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:48.878 13:58:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.878 13:58:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:48.878 ************************************ 00:12:48.878 START TEST nvmf_nvme_cli 00:12:48.878 ************************************ 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:48.878 * Looking for test storage... 00:12:48.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.878 13:58:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:48.879 13:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:57.023 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:57.023 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:57.024 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:57.024 Found net devices under 0000:31:00.0: cvl_0_0 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:57.024 Found net devices under 0000:31:00.1: cvl_0_1 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.024 13:58:54 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:12:57.024 00:12:57.024 --- 10.0.0.2 ping statistics --- 00:12:57.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.024 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:12:57.024 00:12:57.024 --- 10.0.0.1 ping statistics --- 00:12:57.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.024 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1270673 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1270673 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1270673 ']' 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.024 13:58:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.024 [2024-07-15 13:58:54.817502] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
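
For readability, the network plumbing that nvmf_tcp_init traced just above condenses to the sketch below. This is a minimal reconstruction of the iproute2/iptables calls shown in the xtrace (run as root); nothing beyond those traced commands is assumed.

  # Put the target-side port (cvl_0_0) in its own network namespace and
  # leave the initiator-side port (cvl_0_1) in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # root ns reaches the target...
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # ...and the target reaches back
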
00:12:57.024 [2024-07-15 13:58:54.817553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.024 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.024 [2024-07-15 13:58:54.893994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.024 [2024-07-15 13:58:54.964406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.024 [2024-07-15 13:58:54.964444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.024 [2024-07-15 13:58:54.964451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.024 [2024-07-15 13:58:54.964457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.024 [2024-07-15 13:58:54.964463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.024 [2024-07-15 13:58:54.964610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.024 [2024-07-15 13:58:54.964717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.024 [2024-07-15 13:58:54.964873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.024 [2024-07-15 13:58:54.964990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.597 [2024-07-15 13:58:55.641373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.597 Malloc0 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.597 Malloc1 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.597 13:58:55 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.597 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.859 [2024-07-15 13:58:55.731187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:12:57.859 00:12:57.859 Discovery Log Number of Records 2, Generation counter 2 00:12:57.859 =====Discovery Log Entry 0====== 00:12:57.859 trtype: tcp 00:12:57.859 adrfam: ipv4 00:12:57.859 subtype: current discovery subsystem 00:12:57.859 treq: not required 00:12:57.859 portid: 0 00:12:57.859 trsvcid: 4420 00:12:57.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:57.859 traddr: 10.0.0.2 00:12:57.859 eflags: explicit discovery connections, duplicate discovery information 00:12:57.859 sectype: none 00:12:57.859 =====Discovery Log Entry 1====== 00:12:57.859 trtype: tcp 00:12:57.859 adrfam: ipv4 00:12:57.859 subtype: nvme subsystem 00:12:57.859 treq: not required 00:12:57.859 portid: 0 00:12:57.859 trsvcid: 4420 00:12:57.859 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:57.859 traddr: 10.0.0.2 00:12:57.859 eflags: none 00:12:57.859 sectype: none 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:57.859 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:57.860 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:57.860 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:57.860 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:57.860 13:58:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:57.860 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:57.860 13:58:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.770 13:58:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:59.770 13:58:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.770 13:58:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.770 13:58:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:59.770 13:58:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:59.770 13:58:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:01.720 13:58:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:01.720 /dev/nvme0n1 ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:01.720 13:58:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.981 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.981 rmmod nvme_tcp 00:13:02.241 rmmod nvme_fabrics 00:13:02.241 rmmod nvme_keyring 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1270673 ']' 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1270673 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1270673 ']' 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1270673 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1270673 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1270673' 00:13:02.241 killing process with pid 1270673 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1270673 00:13:02.241 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1270673 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.501 13:59:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.411 13:59:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.411 00:13:04.411 real 0m15.980s 00:13:04.411 user 0m23.847s 00:13:04.411 sys 0m6.605s 00:13:04.411 13:59:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.411 13:59:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.411 ************************************ 00:13:04.411 END TEST nvmf_nvme_cli 00:13:04.411 ************************************ 00:13:04.411 13:59:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:04.411 13:59:02 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:04.411 13:59:02 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:04.411 13:59:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:04.411 13:59:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.411 13:59:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.411 ************************************ 00:13:04.411 START TEST nvmf_vfio_user 00:13:04.411 ************************************ 00:13:04.411 13:59:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:04.671 * Looking for test storage... 00:13:04.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:04.671 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:04.672 
13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1272230 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1272230' 00:13:04.672 Process pid: 1272230 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1272230 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1272230 ']' 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.672 13:59:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:04.672 [2024-07-15 13:59:02.714846] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:04.672 [2024-07-15 13:59:02.714914] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.672 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.932 [2024-07-15 13:59:02.785128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.932 [2024-07-15 13:59:02.850534] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.932 [2024-07-15 13:59:02.850569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.932 [2024-07-15 13:59:02.850577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.932 [2024-07-15 13:59:02.850583] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.932 [2024-07-15 13:59:02.850588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
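
Pulled out of the xtrace, the vfio-user provisioning that setup_nvmf_vfio_user performs in the lines that follow boils down to the sketch below for device 1 (the same sequence repeats for Malloc2/cnode2). Here $rpc is shorthand for the scripts/rpc.py path used throughout this log, and the sizes come from the MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 settings traced above.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER              # vfio-user transport, not TCP
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1     # socket dir doubles as traddr
  $rpc bdev_malloc_create 64 512 -b Malloc1           # 64 MiB RAM disk, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
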
00:13:04.932 [2024-07-15 13:59:02.850728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.932 [2024-07-15 13:59:02.850846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.932 [2024-07-15 13:59:02.850946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.932 [2024-07-15 13:59:02.850947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.503 13:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.503 13:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:05.503 13:59:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:06.443 13:59:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:06.702 13:59:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:06.702 13:59:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:06.702 13:59:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:06.702 13:59:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:06.702 13:59:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:06.961 Malloc1 00:13:06.961 13:59:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:06.961 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:07.221 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:07.482 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:07.482 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:07.482 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:07.482 Malloc2 00:13:07.482 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:07.741 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:08.001 13:59:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:08.001 13:59:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:08.001 13:59:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:08.001 13:59:06 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:08.001 13:59:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:08.001 13:59:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:08.001 13:59:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:08.001 [2024-07-15 13:59:06.077482] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:08.001 [2024-07-15 13:59:06.077519] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272927 ] 00:13:08.001 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.001 [2024-07-15 13:59:06.110393] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:08.001 [2024-07-15 13:59:06.112684] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:08.001 [2024-07-15 13:59:06.112703] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc8b4ca4000 00:13:08.263 [2024-07-15 13:59:06.116759] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.117696] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.118700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.119709] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.120717] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.121718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.122729] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.123732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.263 [2024-07-15 13:59:06.124748] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:08.263 [2024-07-15 13:59:06.124761] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc8b4c99000 00:13:08.263 [2024-07-15 13:59:06.126086] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:08.263 [2024-07-15 13:59:06.143014] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:08.263 [2024-07-15 13:59:06.143038] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:08.263 [2024-07-15 13:59:06.147880] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:08.263 [2024-07-15 13:59:06.147927] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:08.263 [2024-07-15 13:59:06.148015] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:08.263 [2024-07-15 13:59:06.148035] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:08.263 [2024-07-15 13:59:06.148040] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:08.263 [2024-07-15 13:59:06.148881] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:08.263 [2024-07-15 13:59:06.148890] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:08.263 [2024-07-15 13:59:06.148897] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:08.263 [2024-07-15 13:59:06.149878] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:08.263 [2024-07-15 13:59:06.149886] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:08.263 [2024-07-15 13:59:06.149893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:08.263 [2024-07-15 13:59:06.150885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:08.263 [2024-07-15 13:59:06.150893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:08.263 [2024-07-15 13:59:06.151890] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:08.263 [2024-07-15 13:59:06.151899] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:08.263 [2024-07-15 13:59:06.151907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:08.263 [2024-07-15 13:59:06.151914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:08.263 [2024-07-15 13:59:06.152019] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:08.263 [2024-07-15 13:59:06.152024] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:08.263 [2024-07-15 13:59:06.152029] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:08.263 [2024-07-15 13:59:06.152902] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:08.263 [2024-07-15 13:59:06.153902] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:08.263 [2024-07-15 13:59:06.154905] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:08.263 [2024-07-15 13:59:06.155907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.263 [2024-07-15 13:59:06.155969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:08.263 [2024-07-15 13:59:06.156916] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:08.263 [2024-07-15 13:59:06.156924] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:08.263 [2024-07-15 13:59:06.156929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:08.263 [2024-07-15 13:59:06.156950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:08.263 [2024-07-15 13:59:06.156957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:08.263 [2024-07-15 13:59:06.156973] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:08.263 [2024-07-15 13:59:06.156979] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:08.264 [2024-07-15 13:59:06.156992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157034] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:08.264 [2024-07-15 13:59:06.157041] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:08.264 [2024-07-15 13:59:06.157045] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:08.264 [2024-07-15 13:59:06.157050] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:08.264 [2024-07-15 13:59:06.157054] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:08.264 [2024-07-15 13:59:06.157059] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:08.264 [2024-07-15 13:59:06.157066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.264 [2024-07-15 13:59:06.157115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.264 [2024-07-15 13:59:06.157123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.264 [2024-07-15 13:59:06.157131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.264 [2024-07-15 13:59:06.157136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157171] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:08.264 [2024-07-15 13:59:06.157176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157265] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157280] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:08.264 [2024-07-15 13:59:06.157285] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:08.264 [2024-07-15 13:59:06.157291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157310] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:08.264 [2024-07-15 13:59:06.157323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157338] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:08.264 [2024-07-15 13:59:06.157342] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:08.264 [2024-07-15 13:59:06.157348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157389] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:08.264 [2024-07-15 13:59:06.157393] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:08.264 [2024-07-15 13:59:06.157399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
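A note on the register trace above: offsets 0x0 (CAP), 0x8 (VS), 0x14 (CC), 0x1c (CSTS), 0x24 (AQA), 0x28 (ASQ) and 0x30 (ACQ) are the standard NVMe controller registers, and the get_reg/set_reg entries record the usual enable handshake that the init state machine drives over vfio-user: clear CC.EN and wait for CSTS.RDY = 0, program the admin queue registers, then set CC.EN = 1 and wait for CSTS.RDY = 1. A minimal C sketch of that sequence, using the values from this trace, against a toy in-memory register file (bar0[] and the rd32/wr32/wr64 helpers are illustrative stand-ins, not SPDK source):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NVME_REG_CC   0x14 /* Controller Configuration */
#define NVME_REG_CSTS 0x1c /* Controller Status */
#define NVME_REG_AQA  0x24 /* Admin Queue Attributes */
#define NVME_REG_ASQ  0x28 /* Admin SQ base address */
#define NVME_REG_ACQ  0x30 /* Admin CQ base address */

static uint8_t bar0[0x1000]; /* toy register file standing in for the mapped BAR */

static uint32_t rd32(uint32_t off) { uint32_t v; memcpy(&v, bar0 + off, 4); return v; }
static void wr64(uint32_t off, uint64_t v) { memcpy(bar0 + off, &v, 8); }
static void wr32(uint32_t off, uint32_t v)
{
    memcpy(bar0 + off, &v, 4);
    if (off == NVME_REG_CC) {          /* toy device: CSTS.RDY tracks CC.EN instantly */
        uint32_t rdy = v & 1u;
        memcpy(bar0 + NVME_REG_CSTS, &rdy, 4);
    }
}

int main(void)
{
    wr32(NVME_REG_CC, rd32(NVME_REG_CC) & ~1u); /* "disable and wait for CSTS.RDY = 0" */
    while (rd32(NVME_REG_CSTS) & 1u) { }
    wr64(NVME_REG_ASQ, 0x2000003c0000ULL);      /* offset 0x28 write in the trace */
    wr64(NVME_REG_ACQ, 0x2000003be000ULL);      /* offset 0x30 write in the trace */
    wr32(NVME_REG_AQA, 0x00ff00ffu);            /* 256-entry admin SQ/CQ, zero-based */
    wr32(NVME_REG_CC, 0x00460001u);             /* IOCQES=4, IOSQES=6, EN=1 */
    while (!(rd32(NVME_REG_CSTS) & 1u)) { }     /* "wait for CSTS.RDY = 1" */
    printf("controller ready\n");
    return 0;
}

The CC value 0x460001 written at offset 0x14 also encodes the queue entry sizes (IOSQES=6 for 64-byte SQEs, IOCQES=4 for 16-byte CQEs), which is why the identify dump below reports Submission Queue Entry Size Max: 64 and Completion Queue Entry Size Max: 16.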
00:13:08.264 [2024-07-15 13:59:06.157435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157456] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:08.264 [2024-07-15 13:59:06.157461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:08.264 [2024-07-15 13:59:06.157466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:08.264 [2024-07-15 13:59:06.157484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157575] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:08.264 [2024-07-15 13:59:06.157579] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:08.264 [2024-07-15 13:59:06.157583] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:08.264 [2024-07-15 13:59:06.157586] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:08.264 [2024-07-15 13:59:06.157593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:08.264 [2024-07-15 13:59:06.157600] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:08.264 
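The nvme_pcie_prp_list_append entries above show how each admin command's data buffer is described to the controller: a 4096-byte IDENTIFY fits in one memory page, so PRP1 carries the buffer address and PRP2 stays 0x0, while the 8192-byte GET LOG PAGE spans two pages, so PRP2 points directly at the second page (prp1 = 0x2000002f6000, prp2 = 0x2000002f7000 in the trace). A sketch of that two-case rule, assuming page-aligned buffers (build_prps is a hypothetical helper, not SPDK's actual nvme_pcie_prp_list_append API):

#include <stdint.h>
#include <stdio.h>

#define NVME_PAGE 4096u

/* Pick PRP1/PRP2 for a page-aligned buffer of len bytes; returns -1 if the
 * transfer would need a PRP list (more than two pages). */
static int build_prps(uint64_t vaddr, uint32_t len, uint64_t *prp1, uint64_t *prp2)
{
    *prp1 = vaddr;                                 /* first data pointer */
    if (len <= NVME_PAGE)          *prp2 = 0;      /* one page: PRP2 unused */
    else if (len <= 2 * NVME_PAGE) *prp2 = vaddr + NVME_PAGE; /* second page, direct */
    else                           return -1;      /* would need a PRP list page */
    return 0;
}

int main(void)
{
    uint64_t prp1, prp2;
    /* the 8192-byte GET LOG PAGE above: PRP2 points at the second page */
    build_prps(0x2000002f6000ULL, 8192, &prp1, &prp2);
    printf("prp1=0x%llx prp2=0x%llx\n", (unsigned long long)prp1, (unsigned long long)prp2);
    /* the 4096-byte IDENTIFY transfers above: PRP2 stays 0x0 */
    build_prps(0x2000002fb000ULL, 4096, &prp1, &prp2);
    printf("prp1=0x%llx prp2=0x%llx\n", (unsigned long long)prp1, (unsigned long long)prp2);
    return 0;
}

Transfers longer than two pages would instead make PRP2 point at a page full of pointers (a PRP list), which this sketch deliberately rejects.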
[2024-07-15 13:59:06.157604] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:08.264 [2024-07-15 13:59:06.157610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157617] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:08.264 [2024-07-15 13:59:06.157622] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:08.264 [2024-07-15 13:59:06.157628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157635] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:08.264 [2024-07-15 13:59:06.157640] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:08.264 [2024-07-15 13:59:06.157646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:08.264 [2024-07-15 13:59:06.157653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:08.264 [2024-07-15 13:59:06.157682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:08.264 ===================================================== 00:13:08.264 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:08.264 ===================================================== 00:13:08.264 Controller Capabilities/Features 00:13:08.264 ================================ 00:13:08.264 Vendor ID: 4e58 00:13:08.264 Subsystem Vendor ID: 4e58 00:13:08.264 Serial Number: SPDK1 00:13:08.264 Model Number: SPDK bdev Controller 00:13:08.264 Firmware Version: 24.09 00:13:08.264 Recommended Arb Burst: 6 00:13:08.264 IEEE OUI Identifier: 8d 6b 50 00:13:08.264 Multi-path I/O 00:13:08.264 May have multiple subsystem ports: Yes 00:13:08.264 May have multiple controllers: Yes 00:13:08.264 Associated with SR-IOV VF: No 00:13:08.264 Max Data Transfer Size: 131072 00:13:08.265 Max Number of Namespaces: 32 00:13:08.265 Max Number of I/O Queues: 127 00:13:08.265 NVMe Specification Version (VS): 1.3 00:13:08.265 NVMe Specification Version (Identify): 1.3 00:13:08.265 Maximum Queue Entries: 256 00:13:08.265 Contiguous Queues Required: Yes 00:13:08.265 Arbitration Mechanisms Supported 00:13:08.265 Weighted Round Robin: Not Supported 00:13:08.265 Vendor Specific: Not Supported 00:13:08.265 Reset Timeout: 15000 ms 00:13:08.265 Doorbell Stride: 4 bytes 00:13:08.265 NVM Subsystem Reset: Not Supported 00:13:08.265 Command Sets Supported 00:13:08.265 NVM Command Set: Supported 00:13:08.265 Boot Partition: Not Supported 00:13:08.265 Memory Page Size Minimum: 4096 bytes 00:13:08.265 Memory Page Size Maximum: 4096 bytes 00:13:08.265 Persistent Memory Region: Not Supported 
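The "Max Number of I/O Queues: 127" reported above is not configured directly; it falls out of the NUMBER OF QUEUES feature completions earlier in the trace, whose cdw0:7e007e packs two zero-based counts. An illustrative decode (not SPDK code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t cdw0 = 0x007e007e;          /* from the NUMBER OF QUEUES completions above */
    unsigned sqs = (cdw0 & 0xffffu) + 1; /* NSQA, zero-based -> 127 I/O SQs */
    unsigned cqs = (cdw0 >> 16)     + 1; /* NCQA, zero-based -> 127 I/O CQs */
    printf("%u I/O submission queues, %u I/O completion queues\n", sqs, cqs);
    return 0;
}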
00:13:08.265 Optional Asynchronous Events Supported 00:13:08.265 Namespace Attribute Notices: Supported 00:13:08.265 Firmware Activation Notices: Not Supported 00:13:08.265 ANA Change Notices: Not Supported 00:13:08.265 PLE Aggregate Log Change Notices: Not Supported 00:13:08.265 LBA Status Info Alert Notices: Not Supported 00:13:08.265 EGE Aggregate Log Change Notices: Not Supported 00:13:08.265 Normal NVM Subsystem Shutdown event: Not Supported 00:13:08.265 Zone Descriptor Change Notices: Not Supported 00:13:08.265 Discovery Log Change Notices: Not Supported 00:13:08.265 Controller Attributes 00:13:08.265 128-bit Host Identifier: Supported 00:13:08.265 Non-Operational Permissive Mode: Not Supported 00:13:08.265 NVM Sets: Not Supported 00:13:08.265 Read Recovery Levels: Not Supported 00:13:08.265 Endurance Groups: Not Supported 00:13:08.265 Predictable Latency Mode: Not Supported 00:13:08.265 Traffic Based Keep ALive: Not Supported 00:13:08.265 Namespace Granularity: Not Supported 00:13:08.265 SQ Associations: Not Supported 00:13:08.265 UUID List: Not Supported 00:13:08.265 Multi-Domain Subsystem: Not Supported 00:13:08.265 Fixed Capacity Management: Not Supported 00:13:08.265 Variable Capacity Management: Not Supported 00:13:08.265 Delete Endurance Group: Not Supported 00:13:08.265 Delete NVM Set: Not Supported 00:13:08.265 Extended LBA Formats Supported: Not Supported 00:13:08.265 Flexible Data Placement Supported: Not Supported 00:13:08.265 00:13:08.265 Controller Memory Buffer Support 00:13:08.265 ================================ 00:13:08.265 Supported: No 00:13:08.265 00:13:08.265 Persistent Memory Region Support 00:13:08.265 ================================ 00:13:08.265 Supported: No 00:13:08.265 00:13:08.265 Admin Command Set Attributes 00:13:08.265 ============================ 00:13:08.265 Security Send/Receive: Not Supported 00:13:08.265 Format NVM: Not Supported 00:13:08.265 Firmware Activate/Download: Not Supported 00:13:08.265 Namespace Management: Not Supported 00:13:08.265 Device Self-Test: Not Supported 00:13:08.265 Directives: Not Supported 00:13:08.265 NVMe-MI: Not Supported 00:13:08.265 Virtualization Management: Not Supported 00:13:08.265 Doorbell Buffer Config: Not Supported 00:13:08.265 Get LBA Status Capability: Not Supported 00:13:08.265 Command & Feature Lockdown Capability: Not Supported 00:13:08.265 Abort Command Limit: 4 00:13:08.265 Async Event Request Limit: 4 00:13:08.265 Number of Firmware Slots: N/A 00:13:08.265 Firmware Slot 1 Read-Only: N/A 00:13:08.265 Firmware Activation Without Reset: N/A 00:13:08.265 Multiple Update Detection Support: N/A 00:13:08.265 Firmware Update Granularity: No Information Provided 00:13:08.265 Per-Namespace SMART Log: No 00:13:08.265 Asymmetric Namespace Access Log Page: Not Supported 00:13:08.265 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:08.265 Command Effects Log Page: Supported 00:13:08.265 Get Log Page Extended Data: Supported 00:13:08.265 Telemetry Log Pages: Not Supported 00:13:08.265 Persistent Event Log Pages: Not Supported 00:13:08.265 Supported Log Pages Log Page: May Support 00:13:08.265 Commands Supported & Effects Log Page: Not Supported 00:13:08.265 Feature Identifiers & Effects Log Page:May Support 00:13:08.265 NVMe-MI Commands & Effects Log Page: May Support 00:13:08.265 Data Area 4 for Telemetry Log: Not Supported 00:13:08.265 Error Log Page Entries Supported: 128 00:13:08.265 Keep Alive: Supported 00:13:08.265 Keep Alive Granularity: 10000 ms 00:13:08.265 00:13:08.265 NVM Command Set Attributes 
00:13:08.265 ========================== 00:13:08.265 Submission Queue Entry Size 00:13:08.265 Max: 64 00:13:08.265 Min: 64 00:13:08.265 Completion Queue Entry Size 00:13:08.265 Max: 16 00:13:08.265 Min: 16 00:13:08.265 Number of Namespaces: 32 00:13:08.265 Compare Command: Supported 00:13:08.265 Write Uncorrectable Command: Not Supported 00:13:08.265 Dataset Management Command: Supported 00:13:08.265 Write Zeroes Command: Supported 00:13:08.265 Set Features Save Field: Not Supported 00:13:08.265 Reservations: Not Supported 00:13:08.265 Timestamp: Not Supported 00:13:08.265 Copy: Supported 00:13:08.265 Volatile Write Cache: Present 00:13:08.265 Atomic Write Unit (Normal): 1 00:13:08.265 Atomic Write Unit (PFail): 1 00:13:08.265 Atomic Compare & Write Unit: 1 00:13:08.265 Fused Compare & Write: Supported 00:13:08.265 Scatter-Gather List 00:13:08.265 SGL Command Set: Supported (Dword aligned) 00:13:08.265 SGL Keyed: Not Supported 00:13:08.265 SGL Bit Bucket Descriptor: Not Supported 00:13:08.265 SGL Metadata Pointer: Not Supported 00:13:08.265 Oversized SGL: Not Supported 00:13:08.265 SGL Metadata Address: Not Supported 00:13:08.265 SGL Offset: Not Supported 00:13:08.265 Transport SGL Data Block: Not Supported 00:13:08.265 Replay Protected Memory Block: Not Supported 00:13:08.265 00:13:08.265 Firmware Slot Information 00:13:08.265 ========================= 00:13:08.265 Active slot: 1 00:13:08.265 Slot 1 Firmware Revision: 24.09 00:13:08.265 00:13:08.265 00:13:08.265 Commands Supported and Effects 00:13:08.265 ============================== 00:13:08.265 Admin Commands 00:13:08.265 -------------- 00:13:08.265 Get Log Page (02h): Supported 00:13:08.265 Identify (06h): Supported 00:13:08.265 Abort (08h): Supported 00:13:08.265 Set Features (09h): Supported 00:13:08.265 Get Features (0Ah): Supported 00:13:08.265 Asynchronous Event Request (0Ch): Supported 00:13:08.265 Keep Alive (18h): Supported 00:13:08.265 I/O Commands 00:13:08.265 ------------ 00:13:08.265 Flush (00h): Supported LBA-Change 00:13:08.265 Write (01h): Supported LBA-Change 00:13:08.265 Read (02h): Supported 00:13:08.265 Compare (05h): Supported 00:13:08.265 Write Zeroes (08h): Supported LBA-Change 00:13:08.265 Dataset Management (09h): Supported LBA-Change 00:13:08.265 Copy (19h): Supported LBA-Change 00:13:08.265 00:13:08.265 Error Log 00:13:08.265 ========= 00:13:08.265 00:13:08.265 Arbitration 00:13:08.265 =========== 00:13:08.265 Arbitration Burst: 1 00:13:08.265 00:13:08.265 Power Management 00:13:08.265 ================ 00:13:08.265 Number of Power States: 1 00:13:08.265 Current Power State: Power State #0 00:13:08.265 Power State #0: 00:13:08.265 Max Power: 0.00 W 00:13:08.265 Non-Operational State: Operational 00:13:08.265 Entry Latency: Not Reported 00:13:08.265 Exit Latency: Not Reported 00:13:08.265 Relative Read Throughput: 0 00:13:08.265 Relative Read Latency: 0 00:13:08.265 Relative Write Throughput: 0 00:13:08.265 Relative Write Latency: 0 00:13:08.265 Idle Power: Not Reported 00:13:08.265 Active Power: Not Reported 00:13:08.265 Non-Operational Permissive Mode: Not Supported 00:13:08.265 00:13:08.265 Health Information 00:13:08.265 ================== 00:13:08.265 Critical Warnings: 00:13:08.265 Available Spare Space: OK 00:13:08.265 Temperature: OK 00:13:08.265 Device Reliability: OK 00:13:08.265 Read Only: No 00:13:08.265 Volatile Memory Backup: OK 00:13:08.265 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:08.265 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:08.265 Available Spare: 0% 00:13:08.265 
Available Spare Threshold: 0% [2024-07-15 13:59:06.157786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:08.265 [2024-07-15 13:59:06.157795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:08.265 [2024-07-15 13:59:06.157825] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:08.265 [2024-07-15 13:59:06.157834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.265 [2024-07-15 13:59:06.157841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.265 [2024-07-15 13:59:06.157847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.266 [2024-07-15 13:59:06.157855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.266 [2024-07-15 13:59:06.157921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:08.266 [2024-07-15 13:59:06.157930] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:08.266 [2024-07-15 13:59:06.158917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.266 [2024-07-15 13:59:06.158958] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:08.266 [2024-07-15 13:59:06.158964] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:08.266 [2024-07-15 13:59:06.159931] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:08.266 [2024-07-15 13:59:06.159942] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:08.266 [2024-07-15 13:59:06.160004] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:08.266 [2024-07-15 13:59:06.163758] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:08.266 Life Percentage Used: 0% 00:13:08.266 Data Units Read: 0 00:13:08.266 Data Units Written: 0 00:13:08.266 Host Read Commands: 0 00:13:08.266 Host Write Commands: 0 00:13:08.266 Controller Busy Time: 0 minutes 00:13:08.266 Power Cycles: 0 00:13:08.266 Power On Hours: 0 hours 00:13:08.266 Unsafe Shutdowns: 0 00:13:08.266 Unrecoverable Media Errors: 0 00:13:08.266 Lifetime Error Log Entries: 0 00:13:08.266 Warning Temperature Time: 0 minutes 00:13:08.266 Critical Temperature Time: 0 minutes 00:13:08.266 00:13:08.266 Number of Queues 00:13:08.266 ================ 00:13:08.266 Number of I/O Submission Queues: 127 00:13:08.266 Number of I/O Completion Queues: 127 00:13:08.266 00:13:08.266 Active Namespaces 00:13:08.266 ================= 00:13:08.266 Namespace ID:1 00:13:08.266 Error Recovery Timeout: Unlimited 00:13:08.266 Command
Set Identifier: NVM (00h) 00:13:08.266 Deallocate: Supported 00:13:08.266 Deallocated/Unwritten Error: Not Supported 00:13:08.266 Deallocated Read Value: Unknown 00:13:08.266 Deallocate in Write Zeroes: Not Supported 00:13:08.266 Deallocated Guard Field: 0xFFFF 00:13:08.266 Flush: Supported 00:13:08.266 Reservation: Supported 00:13:08.266 Namespace Sharing Capabilities: Multiple Controllers 00:13:08.266 Size (in LBAs): 131072 (0GiB) 00:13:08.266 Capacity (in LBAs): 131072 (0GiB) 00:13:08.266 Utilization (in LBAs): 131072 (0GiB) 00:13:08.266 NGUID: 784B19845D1C4FCAB1A31D27ABD5C8BE 00:13:08.266 UUID: 784b1984-5d1c-4fca-b1a3-1d27abd5c8be 00:13:08.266 Thin Provisioning: Not Supported 00:13:08.266 Per-NS Atomic Units: Yes 00:13:08.266 Atomic Boundary Size (Normal): 0 00:13:08.266 Atomic Boundary Size (PFail): 0 00:13:08.266 Atomic Boundary Offset: 0 00:13:08.266 Maximum Single Source Range Length: 65535 00:13:08.266 Maximum Copy Length: 65535 00:13:08.266 Maximum Source Range Count: 1 00:13:08.266 NGUID/EUI64 Never Reused: No 00:13:08.266 Namespace Write Protected: No 00:13:08.266 Number of LBA Formats: 1 00:13:08.266 Current LBA Format: LBA Format #00 00:13:08.266 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:08.266 00:13:08.266 13:59:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:08.266 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.266 [2024-07-15 13:59:06.347358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:13.549 Initializing NVMe Controllers 00:13:13.549 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:13.549 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:13.549 Initialization complete. Launching workers. 00:13:13.549 ======================================================== 00:13:13.549 Latency(us) 00:13:13.549 Device Information : IOPS MiB/s Average min max 00:13:13.549 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39953.12 156.07 3203.43 830.71 7356.40 00:13:13.549 ======================================================== 00:13:13.549 Total : 39953.12 156.07 3203.43 830.71 7356.40 00:13:13.549 00:13:13.549 [2024-07-15 13:59:11.364855] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:13.549 13:59:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:13.549 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.549 [2024-07-15 13:59:11.548732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:18.832 Initializing NVMe Controllers 00:13:18.832 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.832 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:18.832 Initialization complete. Launching workers. 
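A quick cross-check of the read run above: the MiB/s column is IOPS times I/O size, and the run used -o 4096, so 39953.12 IOPS x 4096 B ≈ 163,647,980 B/s, which divided by 2^20 gives the reported 156.07 MiB/s.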
00:13:18.832 ======================================================== 00:13:18.832 Latency(us) 00:13:18.832 Device Information : IOPS MiB/s Average min max 00:13:18.832 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 7628.83 7986.88 00:13:18.832 ======================================================== 00:13:18.832 Total : 16051.20 62.70 7980.74 7628.83 7986.88 00:13:18.832 00:13:18.832 [2024-07-15 13:59:16.583790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:18.832 13:59:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:18.832 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.832 [2024-07-15 13:59:16.771650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:24.114 [2024-07-15 13:59:21.835932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:24.114 Initializing NVMe Controllers 00:13:24.114 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:24.114 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:24.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:24.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:24.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:24.114 Initialization complete. Launching workers. 00:13:24.114 Starting thread on core 2 00:13:24.114 Starting thread on core 3 00:13:24.114 Starting thread on core 1 00:13:24.114 13:59:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:24.114 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.114 [2024-07-15 13:59:22.101168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.409 [2024-07-15 13:59:25.165341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:27.409 Initializing NVMe Controllers 00:13:27.409 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:27.409 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:27.409 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:27.409 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:27.409 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:27.409 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:27.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:27.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:27.409 Initialization complete. Launching workers. 
00:13:27.409 Starting thread on core 1 with urgent priority queue 00:13:27.409 Starting thread on core 2 with urgent priority queue 00:13:27.409 Starting thread on core 3 with urgent priority queue 00:13:27.409 Starting thread on core 0 with urgent priority queue 00:13:27.409 SPDK bdev Controller (SPDK1 ) core 0: 10479.00 IO/s 9.54 secs/100000 ios 00:13:27.409 SPDK bdev Controller (SPDK1 ) core 1: 16166.33 IO/s 6.19 secs/100000 ios 00:13:27.409 SPDK bdev Controller (SPDK1 ) core 2: 8869.00 IO/s 11.28 secs/100000 ios 00:13:27.409 SPDK bdev Controller (SPDK1 ) core 3: 14142.33 IO/s 7.07 secs/100000 ios 00:13:27.409 ======================================================== 00:13:27.409 00:13:27.409 13:59:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:27.409 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.409 [2024-07-15 13:59:25.441231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.409 Initializing NVMe Controllers 00:13:27.409 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:27.409 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:27.409 Namespace ID: 1 size: 0GB 00:13:27.409 Initialization complete. 00:13:27.409 INFO: using host memory buffer for IO 00:13:27.409 Hello world! 00:13:27.409 [2024-07-15 13:59:25.484494] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:27.669 13:59:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:27.669 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.669 [2024-07-15 13:59:25.761190] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:29.050 Initializing NVMe Controllers 00:13:29.050 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:29.050 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:29.050 Initialization complete. Launching workers. 
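The secs/100000 ios column in the arbitration summary above is simply the fixed 100000-I/O budget divided by the per-core rate: 100000 / 10479.00 IO/s ≈ 9.54 s for core 0 and 100000 / 16166.33 IO/s ≈ 6.19 s for core 1, so each core's completion time follows directly from its throughput.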
00:13:29.051 submit (in ns) avg, min, max = 8168.0, 3914.2, 4000938.3 00:13:29.051 complete (in ns) avg, min, max = 18673.7, 2386.7, 5994388.3 00:13:29.051 00:13:29.051 Submit histogram 00:13:29.051 ================ 00:13:29.051 Range in us Cumulative Count 00:13:29.051 3.893 - 3.920: 0.0565% ( 11) 00:13:29.051 3.920 - 3.947: 4.3907% ( 844) 00:13:29.051 3.947 - 3.973: 12.4069% ( 1561) 00:13:29.051 3.973 - 4.000: 22.1640% ( 1900) 00:13:29.051 4.000 - 4.027: 33.8263% ( 2271) 00:13:29.051 4.027 - 4.053: 47.4400% ( 2651) 00:13:29.051 4.053 - 4.080: 63.8320% ( 3192) 00:13:29.051 4.080 - 4.107: 79.4844% ( 3048) 00:13:29.051 4.107 - 4.133: 90.4586% ( 2137) 00:13:29.051 4.133 - 4.160: 95.8712% ( 1054) 00:13:29.051 4.160 - 4.187: 98.2283% ( 459) 00:13:29.051 4.187 - 4.213: 99.0705% ( 164) 00:13:29.051 4.213 - 4.240: 99.3786% ( 60) 00:13:29.051 4.240 - 4.267: 99.4351% ( 11) 00:13:29.051 4.267 - 4.293: 99.4557% ( 4) 00:13:29.051 4.533 - 4.560: 99.4608% ( 1) 00:13:29.051 4.587 - 4.613: 99.4659% ( 1) 00:13:29.051 4.613 - 4.640: 99.4711% ( 1) 00:13:29.051 4.773 - 4.800: 99.4762% ( 1) 00:13:29.051 4.853 - 4.880: 99.4813% ( 1) 00:13:29.051 5.253 - 5.280: 99.4865% ( 1) 00:13:29.051 5.387 - 5.413: 99.4916% ( 1) 00:13:29.051 5.493 - 5.520: 99.4967% ( 1) 00:13:29.051 5.627 - 5.653: 99.5019% ( 1) 00:13:29.051 5.680 - 5.707: 99.5070% ( 1) 00:13:29.051 5.813 - 5.840: 99.5121% ( 1) 00:13:29.051 5.840 - 5.867: 99.5173% ( 1) 00:13:29.051 6.000 - 6.027: 99.5276% ( 2) 00:13:29.051 6.027 - 6.053: 99.5378% ( 2) 00:13:29.051 6.080 - 6.107: 99.5481% ( 2) 00:13:29.051 6.107 - 6.133: 99.5532% ( 1) 00:13:29.051 6.160 - 6.187: 99.5738% ( 4) 00:13:29.051 6.187 - 6.213: 99.5943% ( 4) 00:13:29.051 6.213 - 6.240: 99.5994% ( 1) 00:13:29.051 6.240 - 6.267: 99.6097% ( 2) 00:13:29.051 6.267 - 6.293: 99.6149% ( 1) 00:13:29.051 6.293 - 6.320: 99.6200% ( 1) 00:13:29.051 6.320 - 6.347: 99.6354% ( 3) 00:13:29.051 6.373 - 6.400: 99.6405% ( 1) 00:13:29.051 6.453 - 6.480: 99.6457% ( 1) 00:13:29.051 6.507 - 6.533: 99.6508% ( 1) 00:13:29.051 6.533 - 6.560: 99.6559% ( 1) 00:13:29.051 6.613 - 6.640: 99.6611% ( 1) 00:13:29.051 6.640 - 6.667: 99.6662% ( 1) 00:13:29.051 6.693 - 6.720: 99.6765% ( 2) 00:13:29.051 6.720 - 6.747: 99.6816% ( 1) 00:13:29.051 6.747 - 6.773: 99.6867% ( 1) 00:13:29.051 6.800 - 6.827: 99.7022% ( 3) 00:13:29.051 6.933 - 6.987: 99.7176% ( 3) 00:13:29.051 6.987 - 7.040: 99.7227% ( 1) 00:13:29.051 7.040 - 7.093: 99.7278% ( 1) 00:13:29.051 7.093 - 7.147: 99.7381% ( 2) 00:13:29.051 7.200 - 7.253: 99.7432% ( 1) 00:13:29.051 7.253 - 7.307: 99.7586% ( 3) 00:13:29.051 7.307 - 7.360: 99.7638% ( 1) 00:13:29.051 7.360 - 7.413: 99.7689% ( 1) 00:13:29.051 7.413 - 7.467: 99.7843% ( 3) 00:13:29.051 7.467 - 7.520: 99.7946% ( 2) 00:13:29.051 7.520 - 7.573: 99.7997% ( 1) 00:13:29.051 7.573 - 7.627: 99.8049% ( 1) 00:13:29.051 7.680 - 7.733: 99.8100% ( 1) 00:13:29.051 7.733 - 7.787: 99.8151% ( 1) 00:13:29.051 7.787 - 7.840: 99.8203% ( 1) 00:13:29.051 7.947 - 8.000: 99.8357% ( 3) 00:13:29.051 8.053 - 8.107: 99.8459% ( 2) 00:13:29.051 8.160 - 8.213: 99.8562% ( 2) 00:13:29.051 8.213 - 8.267: 99.8613% ( 1) 00:13:29.051 8.320 - 8.373: 99.8768% ( 3) 00:13:29.051 8.373 - 8.427: 99.8870% ( 2) 00:13:29.051 8.907 - 8.960: 99.8922% ( 1) 00:13:29.051 13.387 - 13.440: 99.8973% ( 1) 00:13:29.051 3986.773 - 4014.080: 100.0000% ( 20) 00:13:29.051 00:13:29.051 Complete histogram 00:13:29.051 ================== 00:13:29.051 Range in us Cumulative Count 00:13:29.051 2.387 - 2.400: 0.0154% ( 3) 00:13:29.051 2.400 - 2.413: 0.3954% ( 74) 00:13:29.051 2.413 - 2.427: 
2.427: 0.8114% ( 81) 00:13:29.051 2.427 - 2.440: 0.9860% ( 34) 00:13:29.051 2.440 - 2.453: 17.4190% ( 3200) 00:13:29.051 [2024-07-15 13:59:26.783737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:29.051 2.453 - 2.467: 53.6589% ( 7057) 00:13:29.051 2.467 - 2.480: 64.6023% ( 2131) 00:13:29.051 2.480 - 2.493: 75.6535% ( 2152) 00:13:29.051 2.493 - 2.507: 80.8761% ( 1017) 00:13:29.051 2.507 - 2.520: 82.8891% ( 392) 00:13:29.051 2.520 - 2.533: 88.0963% ( 1014) 00:13:29.051 2.533 - 2.547: 93.3754% ( 1028) 00:13:29.051 2.547 - 2.560: 96.1280% ( 536) 00:13:29.051 2.560 - 2.573: 98.0794% ( 380) 00:13:29.051 2.573 - 2.587: 99.0294% ( 185) 00:13:29.051 2.587 - 2.600: 99.3427% ( 61) 00:13:29.051 2.600 - 2.613: 99.3530% ( 2) 00:13:29.051 2.613 - 2.627: 99.3684% ( 3) 00:13:29.051 2.627 - 2.640: 99.3735% ( 1) 00:13:29.051 2.680 - 2.693: 99.3786% ( 1) 00:13:29.051 4.400 - 4.427: 99.3838% ( 1) 00:13:29.051 4.667 - 4.693: 99.3889% ( 1) 00:13:29.051 4.693 - 4.720: 99.3992% ( 2) 00:13:29.051 4.747 - 4.773: 99.4043% ( 1) 00:13:29.051 4.800 - 4.827: 99.4094% ( 1) 00:13:29.051 4.827 - 4.853: 99.4197% ( 2) 00:13:29.051 4.880 - 4.907: 99.4248% ( 1) 00:13:29.051 4.907 - 4.933: 99.4351% ( 2) 00:13:29.051 4.933 - 4.960: 99.4403% ( 1) 00:13:29.051 4.987 - 5.013: 99.4454% ( 1) 00:13:29.051 5.227 - 5.253: 99.4557% ( 2) 00:13:29.051 5.467 - 5.493: 99.4608% ( 1) 00:13:29.051 5.493 - 5.520: 99.4659% ( 1) 00:13:29.052 5.573 - 5.600: 99.4711% ( 1) 00:13:29.052 5.627 - 5.653: 99.4762% ( 1) 00:13:29.052 5.760 - 5.787: 99.4865% ( 2) 00:13:29.052 5.813 - 5.840: 99.4916% ( 1) 00:13:29.052 5.867 - 5.893: 99.4967% ( 1) 00:13:29.052 5.920 - 5.947: 99.5019% ( 1) 00:13:29.052 6.027 - 6.053: 99.5070% ( 1) 00:13:29.052 6.053 - 6.080: 99.5173% ( 2) 00:13:29.052 6.080 - 6.107: 99.5224% ( 1) 00:13:29.052 6.133 - 6.160: 99.5276% ( 1) 00:13:29.052 6.213 - 6.240: 99.5327% ( 1) 00:13:29.052 6.267 - 6.293: 99.5378% ( 1) 00:13:29.052 6.347 - 6.373: 99.5430% ( 1) 00:13:29.052 6.400 - 6.427: 99.5481% ( 1) 00:13:29.052 6.533 - 6.560: 99.5532% ( 1) 00:13:29.052 6.587 - 6.613: 99.5584% ( 1) 00:13:29.052 6.667 - 6.693: 99.5635% ( 1) 00:13:29.052 6.747 - 6.773: 99.5686% ( 1) 00:13:29.052 7.147 - 7.200: 99.5738% ( 1) 00:13:29.052 7.200 - 7.253: 99.5789% ( 1) 00:13:29.052 11.467 - 11.520: 99.5840% ( 1) 00:13:29.052 13.120 - 13.173: 99.5892% ( 1) 00:13:29.052 14.400 - 14.507: 99.5943% ( 1) 00:13:29.052 2034.347 - 2048.000: 99.6097% ( 3) 00:13:29.052 3495.253 - 3522.560: 99.6149% ( 1) 00:13:29.052 3986.773 - 4014.080: 99.9846% ( 72) 00:13:29.052 5980.160 - 6007.467: 100.0000% ( 3) 00:13:29.052 00:13:29.052 13:59:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:29.052 13:59:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:29.052 13:59:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:29.052 13:59:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:29.052 13:59:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:29.052 [ 00:13:29.052 { 00:13:29.052 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:29.052 "subtype": "Discovery", 00:13:29.052 "listen_addresses": [], 00:13:29.052 "allow_any_host": true, 00:13:29.052 "hosts": []
00:13:29.052 }, 00:13:29.052 { 00:13:29.052 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:29.052 "subtype": "NVMe", 00:13:29.052 "listen_addresses": [ 00:13:29.052 { 00:13:29.052 "trtype": "VFIOUSER", 00:13:29.052 "adrfam": "IPv4", 00:13:29.052 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:29.052 "trsvcid": "0" 00:13:29.052 } 00:13:29.052 ], 00:13:29.052 "allow_any_host": true, 00:13:29.052 "hosts": [], 00:13:29.052 "serial_number": "SPDK1", 00:13:29.052 "model_number": "SPDK bdev Controller", 00:13:29.052 "max_namespaces": 32, 00:13:29.052 "min_cntlid": 1, 00:13:29.052 "max_cntlid": 65519, 00:13:29.052 "namespaces": [ 00:13:29.052 { 00:13:29.052 "nsid": 1, 00:13:29.052 "bdev_name": "Malloc1", 00:13:29.052 "name": "Malloc1", 00:13:29.052 "nguid": "784B19845D1C4FCAB1A31D27ABD5C8BE", 00:13:29.052 "uuid": "784b1984-5d1c-4fca-b1a3-1d27abd5c8be" 00:13:29.052 } 00:13:29.052 ] 00:13:29.052 }, 00:13:29.052 { 00:13:29.052 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:29.052 "subtype": "NVMe", 00:13:29.052 "listen_addresses": [ 00:13:29.052 { 00:13:29.052 "trtype": "VFIOUSER", 00:13:29.052 "adrfam": "IPv4", 00:13:29.052 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:29.052 "trsvcid": "0" 00:13:29.052 } 00:13:29.052 ], 00:13:29.052 "allow_any_host": true, 00:13:29.052 "hosts": [], 00:13:29.052 "serial_number": "SPDK2", 00:13:29.052 "model_number": "SPDK bdev Controller", 00:13:29.052 "max_namespaces": 32, 00:13:29.052 "min_cntlid": 1, 00:13:29.052 "max_cntlid": 65519, 00:13:29.052 "namespaces": [ 00:13:29.052 { 00:13:29.052 "nsid": 1, 00:13:29.052 "bdev_name": "Malloc2", 00:13:29.052 "name": "Malloc2", 00:13:29.052 "nguid": "1F4ED1348EDD4FFBA38BCCD25C68D290", 00:13:29.052 "uuid": "1f4ed134-8edd-4ffb-a38b-ccd25c68d290" 00:13:29.052 } 00:13:29.052 ] 00:13:29.052 } 00:13:29.052 ] 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1277037 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:29.052 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:29.052 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.313 Malloc3 00:13:29.313 [2024-07-15 13:59:27.179220] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:29.313 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:29.313 [2024-07-15 13:59:27.348334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:29.313 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:29.313 Asynchronous Event Request test 00:13:29.313 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:29.313 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:29.313 Registering asynchronous event callbacks... 00:13:29.314 Starting namespace attribute notice tests for all controllers... 00:13:29.314 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:29.314 aer_cb - Changed Namespace 00:13:29.314 Cleaning up... 00:13:29.575 [ 00:13:29.575 { 00:13:29.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:29.575 "subtype": "Discovery", 00:13:29.575 "listen_addresses": [], 00:13:29.575 "allow_any_host": true, 00:13:29.575 "hosts": [] 00:13:29.575 }, 00:13:29.575 { 00:13:29.575 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:29.575 "subtype": "NVMe", 00:13:29.575 "listen_addresses": [ 00:13:29.575 { 00:13:29.575 "trtype": "VFIOUSER", 00:13:29.575 "adrfam": "IPv4", 00:13:29.575 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:29.575 "trsvcid": "0" 00:13:29.575 } 00:13:29.575 ], 00:13:29.575 "allow_any_host": true, 00:13:29.575 "hosts": [], 00:13:29.575 "serial_number": "SPDK1", 00:13:29.575 "model_number": "SPDK bdev Controller", 00:13:29.575 "max_namespaces": 32, 00:13:29.575 "min_cntlid": 1, 00:13:29.575 "max_cntlid": 65519, 00:13:29.575 "namespaces": [ 00:13:29.575 { 00:13:29.575 "nsid": 1, 00:13:29.575 "bdev_name": "Malloc1", 00:13:29.575 "name": "Malloc1", 00:13:29.575 "nguid": "784B19845D1C4FCAB1A31D27ABD5C8BE", 00:13:29.575 "uuid": "784b1984-5d1c-4fca-b1a3-1d27abd5c8be" 00:13:29.575 }, 00:13:29.575 { 00:13:29.575 "nsid": 2, 00:13:29.575 "bdev_name": "Malloc3", 00:13:29.575 "name": "Malloc3", 00:13:29.575 "nguid": "5DB6E4433B0F4384B9FC34A2B5AF9487", 00:13:29.575 "uuid": "5db6e443-3b0f-4384-b9fc-34a2b5af9487" 00:13:29.575 } 00:13:29.575 ] 00:13:29.575 }, 00:13:29.575 { 00:13:29.575 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:29.575 "subtype": "NVMe", 00:13:29.575 "listen_addresses": [ 00:13:29.575 { 00:13:29.575 "trtype": "VFIOUSER", 00:13:29.575 "adrfam": "IPv4", 00:13:29.575 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:29.575 "trsvcid": "0" 00:13:29.575 } 00:13:29.575 ], 00:13:29.575 "allow_any_host": true, 00:13:29.575 "hosts": [], 00:13:29.575 "serial_number": "SPDK2", 00:13:29.575 "model_number": "SPDK bdev Controller", 00:13:29.575 
"max_namespaces": 32, 00:13:29.575 "min_cntlid": 1, 00:13:29.575 "max_cntlid": 65519, 00:13:29.575 "namespaces": [ 00:13:29.575 { 00:13:29.575 "nsid": 1, 00:13:29.575 "bdev_name": "Malloc2", 00:13:29.575 "name": "Malloc2", 00:13:29.575 "nguid": "1F4ED1348EDD4FFBA38BCCD25C68D290", 00:13:29.575 "uuid": "1f4ed134-8edd-4ffb-a38b-ccd25c68d290" 00:13:29.575 } 00:13:29.575 ] 00:13:29.575 } 00:13:29.575 ] 00:13:29.575 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1277037 00:13:29.575 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:29.575 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:29.575 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:29.575 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:29.575 [2024-07-15 13:59:27.575103] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:29.575 [2024-07-15 13:59:27.575172] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277285 ] 00:13:29.575 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.575 [2024-07-15 13:59:27.608314] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:29.575 [2024-07-15 13:59:27.617560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:29.575 [2024-07-15 13:59:27.617581] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe4ec748000 00:13:29.575 [2024-07-15 13:59:27.618562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.619566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.620576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.621583] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.622587] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.623595] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.624602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.625610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.575 [2024-07-15 13:59:27.626617] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:29.575 [2024-07-15 13:59:27.626627] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe4ec73d000 00:13:29.575 [2024-07-15 13:59:27.627952] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:29.575 [2024-07-15 13:59:27.648914] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:29.575 [2024-07-15 13:59:27.648937] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:29.576 [2024-07-15 13:59:27.650993] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:29.576 [2024-07-15 13:59:27.651037] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:29.576 [2024-07-15 13:59:27.651115] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:29.576 [2024-07-15 13:59:27.651128] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:29.576 [2024-07-15 13:59:27.651134] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:29.576 [2024-07-15 13:59:27.651997] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:29.576 [2024-07-15 13:59:27.652007] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:29.576 [2024-07-15 13:59:27.652019] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:29.576 [2024-07-15 13:59:27.653002] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:29.576 [2024-07-15 13:59:27.653011] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:29.576 [2024-07-15 13:59:27.653018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:29.576 [2024-07-15 13:59:27.654005] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:29.576 [2024-07-15 13:59:27.654014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:29.576 [2024-07-15 13:59:27.655018] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:29.576 [2024-07-15 13:59:27.655028] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:29.576 [2024-07-15 13:59:27.655033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:29.576 [2024-07-15 13:59:27.655040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:29.576 [2024-07-15 13:59:27.655145] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:29.576 [2024-07-15 13:59:27.655150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:29.576 [2024-07-15 13:59:27.655155] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:29.576 [2024-07-15 13:59:27.656027] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:29.576 [2024-07-15 13:59:27.657026] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:29.576 [2024-07-15 13:59:27.658035] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:29.576 [2024-07-15 13:59:27.659037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:29.576 [2024-07-15 13:59:27.659076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:29.576 [2024-07-15 13:59:27.660049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:29.576 [2024-07-15 13:59:27.660057] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:29.576 [2024-07-15 13:59:27.660062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.660083] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:29.576 [2024-07-15 13:59:27.660095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.660107] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:29.576 [2024-07-15 13:59:27.660112] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.576 [2024-07-15 13:59:27.660126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.576 [2024-07-15 13:59:27.666760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:29.576 [2024-07-15 13:59:27.666771] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:29.576 [2024-07-15 13:59:27.666779] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:29.576 [2024-07-15 13:59:27.666783] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:29.576 [2024-07-15 13:59:27.666788] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:29.576 [2024-07-15 13:59:27.666793] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:29.576 [2024-07-15 13:59:27.666797] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:29.576 [2024-07-15 13:59:27.666802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.666809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.666819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:29.576 [2024-07-15 13:59:27.674758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:29.576 [2024-07-15 13:59:27.674773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.576 [2024-07-15 13:59:27.674782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.576 [2024-07-15 13:59:27.674790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.576 [2024-07-15 13:59:27.674798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.576 [2024-07-15 13:59:27.674803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.674811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.674820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:29.576 [2024-07-15 13:59:27.682757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:29.576 [2024-07-15 13:59:27.682765] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:29.576 [2024-07-15 13:59:27.682770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.682777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.682782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:29.576 [2024-07-15 13:59:27.682791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:29.838 [2024-07-15 13:59:27.690757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:29.838 [2024-07-15 13:59:27.690821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:29.838 [2024-07-15 13:59:27.690829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:29.838 [2024-07-15 13:59:27.690836] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:29.839 [2024-07-15 13:59:27.690841] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:29.839 [2024-07-15 13:59:27.690847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.698756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.698766] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:29.839 [2024-07-15 13:59:27.698775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.698782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.698789] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:29.839 [2024-07-15 13:59:27.698794] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.839 [2024-07-15 13:59:27.698800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.706757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.706771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.706779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.706786] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:29.839 [2024-07-15 13:59:27.706790] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.839 [2024-07-15 13:59:27.706796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.714756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.714765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.714772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.714779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.714785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.714790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.714797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.714802] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:29.839 [2024-07-15 13:59:27.714807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:29.839 [2024-07-15 13:59:27.714812] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:29.839 [2024-07-15 13:59:27.714829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.722757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.722771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.730757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.730770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.738756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.738769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.746756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.746774] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:29.839 [2024-07-15 13:59:27.746778] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:29.839 [2024-07-15 13:59:27.746782] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:13:29.839 [2024-07-15 13:59:27.746785] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:29.839 [2024-07-15 13:59:27.746792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:29.839 [2024-07-15 13:59:27.746799] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:29.839 [2024-07-15 13:59:27.746804] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:29.839 [2024-07-15 13:59:27.746810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.746817] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:29.839 [2024-07-15 13:59:27.746821] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.839 [2024-07-15 13:59:27.746827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.746834] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:29.839 [2024-07-15 13:59:27.746839] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:29.839 [2024-07-15 13:59:27.746844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:29.839 [2024-07-15 13:59:27.754758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.754775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.754785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:29.839 [2024-07-15 13:59:27.754792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:29.839 ===================================================== 00:13:29.839 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:29.839 ===================================================== 00:13:29.839 Controller Capabilities/Features 00:13:29.839 ================================ 00:13:29.839 Vendor ID: 4e58 00:13:29.839 Subsystem Vendor ID: 4e58 00:13:29.839 Serial Number: SPDK2 00:13:29.839 Model Number: SPDK bdev Controller 00:13:29.839 Firmware Version: 24.09 00:13:29.839 Recommended Arb Burst: 6 00:13:29.839 IEEE OUI Identifier: 8d 6b 50 00:13:29.839 Multi-path I/O 00:13:29.839 May have multiple subsystem ports: Yes 00:13:29.839 May have multiple controllers: Yes 00:13:29.839 Associated with SR-IOV VF: No 00:13:29.839 Max Data Transfer Size: 131072 00:13:29.839 Max Number of Namespaces: 32 00:13:29.839 Max Number of I/O Queues: 127 00:13:29.839 NVMe Specification Version (VS): 1.3 00:13:29.839 NVMe Specification Version (Identify): 1.3 00:13:29.839 Maximum Queue Entries: 256 00:13:29.839 Contiguous Queues Required: Yes 00:13:29.839 Arbitration Mechanisms 
Supported 00:13:29.839 Weighted Round Robin: Not Supported 00:13:29.839 Vendor Specific: Not Supported 00:13:29.839 Reset Timeout: 15000 ms 00:13:29.839 Doorbell Stride: 4 bytes 00:13:29.839 NVM Subsystem Reset: Not Supported 00:13:29.839 Command Sets Supported 00:13:29.839 NVM Command Set: Supported 00:13:29.839 Boot Partition: Not Supported 00:13:29.839 Memory Page Size Minimum: 4096 bytes 00:13:29.839 Memory Page Size Maximum: 4096 bytes 00:13:29.839 Persistent Memory Region: Not Supported 00:13:29.839 Optional Asynchronous Events Supported 00:13:29.839 Namespace Attribute Notices: Supported 00:13:29.839 Firmware Activation Notices: Not Supported 00:13:29.839 ANA Change Notices: Not Supported 00:13:29.839 PLE Aggregate Log Change Notices: Not Supported 00:13:29.839 LBA Status Info Alert Notices: Not Supported 00:13:29.839 EGE Aggregate Log Change Notices: Not Supported 00:13:29.839 Normal NVM Subsystem Shutdown event: Not Supported 00:13:29.839 Zone Descriptor Change Notices: Not Supported 00:13:29.839 Discovery Log Change Notices: Not Supported 00:13:29.839 Controller Attributes 00:13:29.839 128-bit Host Identifier: Supported 00:13:29.839 Non-Operational Permissive Mode: Not Supported 00:13:29.839 NVM Sets: Not Supported 00:13:29.839 Read Recovery Levels: Not Supported 00:13:29.839 Endurance Groups: Not Supported 00:13:29.839 Predictable Latency Mode: Not Supported 00:13:29.839 Traffic Based Keep ALive: Not Supported 00:13:29.839 Namespace Granularity: Not Supported 00:13:29.839 SQ Associations: Not Supported 00:13:29.839 UUID List: Not Supported 00:13:29.839 Multi-Domain Subsystem: Not Supported 00:13:29.839 Fixed Capacity Management: Not Supported 00:13:29.839 Variable Capacity Management: Not Supported 00:13:29.839 Delete Endurance Group: Not Supported 00:13:29.839 Delete NVM Set: Not Supported 00:13:29.839 Extended LBA Formats Supported: Not Supported 00:13:29.839 Flexible Data Placement Supported: Not Supported 00:13:29.839 00:13:29.839 Controller Memory Buffer Support 00:13:29.839 ================================ 00:13:29.839 Supported: No 00:13:29.839 00:13:29.839 Persistent Memory Region Support 00:13:29.839 ================================ 00:13:29.839 Supported: No 00:13:29.839 00:13:29.839 Admin Command Set Attributes 00:13:29.839 ============================ 00:13:29.839 Security Send/Receive: Not Supported 00:13:29.839 Format NVM: Not Supported 00:13:29.839 Firmware Activate/Download: Not Supported 00:13:29.839 Namespace Management: Not Supported 00:13:29.839 Device Self-Test: Not Supported 00:13:29.839 Directives: Not Supported 00:13:29.839 NVMe-MI: Not Supported 00:13:29.839 Virtualization Management: Not Supported 00:13:29.840 Doorbell Buffer Config: Not Supported 00:13:29.840 Get LBA Status Capability: Not Supported 00:13:29.840 Command & Feature Lockdown Capability: Not Supported 00:13:29.840 Abort Command Limit: 4 00:13:29.840 Async Event Request Limit: 4 00:13:29.840 Number of Firmware Slots: N/A 00:13:29.840 Firmware Slot 1 Read-Only: N/A 00:13:29.840 Firmware Activation Without Reset: N/A 00:13:29.840 Multiple Update Detection Support: N/A 00:13:29.840 Firmware Update Granularity: No Information Provided 00:13:29.840 Per-Namespace SMART Log: No 00:13:29.840 Asymmetric Namespace Access Log Page: Not Supported 00:13:29.840 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:29.840 Command Effects Log Page: Supported 00:13:29.840 Get Log Page Extended Data: Supported 00:13:29.840 Telemetry Log Pages: Not Supported 00:13:29.840 Persistent Event Log Pages: Not Supported 
00:13:29.840 Supported Log Pages Log Page: May Support 00:13:29.840 Commands Supported & Effects Log Page: Not Supported 00:13:29.840 Feature Identifiers & Effects Log Page:May Support 00:13:29.840 NVMe-MI Commands & Effects Log Page: May Support 00:13:29.840 Data Area 4 for Telemetry Log: Not Supported 00:13:29.840 Error Log Page Entries Supported: 128 00:13:29.840 Keep Alive: Supported 00:13:29.840 Keep Alive Granularity: 10000 ms 00:13:29.840 00:13:29.840 NVM Command Set Attributes 00:13:29.840 ========================== 00:13:29.840 Submission Queue Entry Size 00:13:29.840 Max: 64 00:13:29.840 Min: 64 00:13:29.840 Completion Queue Entry Size 00:13:29.840 Max: 16 00:13:29.840 Min: 16 00:13:29.840 Number of Namespaces: 32 00:13:29.840 Compare Command: Supported 00:13:29.840 Write Uncorrectable Command: Not Supported 00:13:29.840 Dataset Management Command: Supported 00:13:29.840 Write Zeroes Command: Supported 00:13:29.840 Set Features Save Field: Not Supported 00:13:29.840 Reservations: Not Supported 00:13:29.840 Timestamp: Not Supported 00:13:29.840 Copy: Supported 00:13:29.840 Volatile Write Cache: Present 00:13:29.840 Atomic Write Unit (Normal): 1 00:13:29.840 Atomic Write Unit (PFail): 1 00:13:29.840 Atomic Compare & Write Unit: 1 00:13:29.840 Fused Compare & Write: Supported 00:13:29.840 Scatter-Gather List 00:13:29.840 SGL Command Set: Supported (Dword aligned) 00:13:29.840 SGL Keyed: Not Supported 00:13:29.840 SGL Bit Bucket Descriptor: Not Supported 00:13:29.840 SGL Metadata Pointer: Not Supported 00:13:29.840 Oversized SGL: Not Supported 00:13:29.840 SGL Metadata Address: Not Supported 00:13:29.840 SGL Offset: Not Supported 00:13:29.840 Transport SGL Data Block: Not Supported 00:13:29.840 Replay Protected Memory Block: Not Supported 00:13:29.840 00:13:29.840 Firmware Slot Information 00:13:29.840 ========================= 00:13:29.840 Active slot: 1 00:13:29.840 Slot 1 Firmware Revision: 24.09 00:13:29.840 00:13:29.840 00:13:29.840 Commands Supported and Effects 00:13:29.840 ============================== 00:13:29.840 Admin Commands 00:13:29.840 -------------- 00:13:29.840 Get Log Page (02h): Supported 00:13:29.840 Identify (06h): Supported 00:13:29.840 Abort (08h): Supported 00:13:29.840 Set Features (09h): Supported 00:13:29.840 Get Features (0Ah): Supported 00:13:29.840 Asynchronous Event Request (0Ch): Supported 00:13:29.840 Keep Alive (18h): Supported 00:13:29.840 I/O Commands 00:13:29.840 ------------ 00:13:29.840 Flush (00h): Supported LBA-Change 00:13:29.840 Write (01h): Supported LBA-Change 00:13:29.840 Read (02h): Supported 00:13:29.840 Compare (05h): Supported 00:13:29.840 Write Zeroes (08h): Supported LBA-Change 00:13:29.840 Dataset Management (09h): Supported LBA-Change 00:13:29.840 Copy (19h): Supported LBA-Change 00:13:29.840 00:13:29.840 Error Log 00:13:29.840 ========= 00:13:29.840 00:13:29.840 Arbitration 00:13:29.840 =========== 00:13:29.840 Arbitration Burst: 1 00:13:29.840 00:13:29.840 Power Management 00:13:29.840 ================ 00:13:29.840 Number of Power States: 1 00:13:29.840 Current Power State: Power State #0 00:13:29.840 Power State #0: 00:13:29.840 Max Power: 0.00 W 00:13:29.840 Non-Operational State: Operational 00:13:29.840 Entry Latency: Not Reported 00:13:29.840 Exit Latency: Not Reported 00:13:29.840 Relative Read Throughput: 0 00:13:29.840 Relative Read Latency: 0 00:13:29.840 Relative Write Throughput: 0 00:13:29.840 Relative Write Latency: 0 00:13:29.840 Idle Power: Not Reported 00:13:29.840 Active Power: Not Reported 00:13:29.840 
Non-Operational Permissive Mode: Not Supported 00:13:29.840 00:13:29.840 Health Information 00:13:29.840 ================== 00:13:29.840 Critical Warnings: 00:13:29.840 Available Spare Space: OK 00:13:29.840 Temperature: OK 00:13:29.840 Device Reliability: OK 00:13:29.840 Read Only: No 00:13:29.840 Volatile Memory Backup: OK 00:13:29.840 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:29.840 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:29.840 Available Spare: 0% 00:13:29.840 Available Spare Threshold: 0% 00:13:29.840 Life Percentage Used: 0% 00:13:29.840 Data Units Read: 0 00:13:29.840 Data Units Written: 0 00:13:29.840 Host Read Commands: 0 00:13:29.840 Host Write Commands: 0 00:13:29.840 Controller Busy Time: 0 minutes 00:13:29.840 Power Cycles: 0 00:13:29.840 Power On Hours: 0 hours 00:13:29.840 Unsafe Shutdowns: 0 00:13:29.840 Unrecoverable Media Errors: 0 00:13:29.840 Lifetime Error Log Entries: 0 00:13:29.840 Warning Temperature Time: 0 minutes 00:13:29.840 Critical Temperature Time: 0 minutes 00:13:29.840
00:13:29.840 Number of Queues 00:13:29.840 ================ 00:13:29.840 Number of I/O Submission Queues: 127 00:13:29.840 Number of I/O Completion Queues: 127 00:13:29.840
00:13:29.840 Active Namespaces 00:13:29.840 ================= 00:13:29.840 Namespace ID:1 00:13:29.840 Error Recovery Timeout: Unlimited 00:13:29.840 Command Set Identifier: NVM (00h) 00:13:29.840 Deallocate: Supported 00:13:29.840 Deallocated/Unwritten Error: Not Supported 00:13:29.840 Deallocated Read Value: Unknown 00:13:29.840 Deallocate in Write Zeroes: Not Supported 00:13:29.840 Deallocated Guard Field: 0xFFFF 00:13:29.840 Flush: Supported 00:13:29.840 Reservation: Supported 00:13:29.840 Namespace Sharing Capabilities: Multiple Controllers 00:13:29.840 Size (in LBAs): 131072 (0GiB) 00:13:29.840 Capacity (in LBAs): 131072 (0GiB) 00:13:29.840 Utilization (in LBAs): 131072 (0GiB) 00:13:29.840 NGUID: 1F4ED1348EDD4FFBA38BCCD25C68D290 00:13:29.840 UUID: 1f4ed134-8edd-4ffb-a38b-ccd25c68d290 00:13:29.840 Thin Provisioning: Not Supported 00:13:29.840 Per-NS Atomic Units: Yes 00:13:29.840 Atomic Boundary Size (Normal): 0 00:13:29.840 Atomic Boundary Size (PFail): 0 00:13:29.840 Atomic Boundary Offset: 0 00:13:29.840 Maximum Single Source Range Length: 65535 00:13:29.840 Maximum Copy Length: 65535 00:13:29.840 Maximum Source Range Count: 1 00:13:29.840 NGUID/EUI64 Never Reused: No 00:13:29.840 Namespace Write Protected: No 00:13:29.840 Number of LBA Formats: 1 00:13:29.840 Current LBA Format: LBA Format #00 00:13:29.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:29.840
00:13:29.840 [2024-07-15 13:59:27.754889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:29.840 [2024-07-15 13:59:27.762758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:29.840 [2024-07-15 13:59:27.762790] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:29.840 [2024-07-15 13:59:27.762799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.840 [2024-07-15 13:59:27.762806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.840 [2024-07-15 13:59:27.762812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.840 [2024-07-15 13:59:27.762818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.840 [2024-07-15 13:59:27.762859] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:29.840 [2024-07-15 13:59:27.762869] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:29.840 [2024-07-15 13:59:27.763863] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:29.840 [2024-07-15 13:59:27.763911] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:29.840 [2024-07-15 13:59:27.763918] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:29.840 [2024-07-15 13:59:27.764862] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:29.840 [2024-07-15 13:59:27.764874] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:29.840 [2024-07-15 13:59:27.764923] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:29.840 [2024-07-15 13:59:27.767757] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:29.840 13:59:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:29.840 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.841 [2024-07-15 13:59:27.950729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:35.130 Initializing NVMe Controllers 00:13:35.130 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:35.130 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:35.130 Initialization complete. Launching workers.
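For reference, the @84 invocation above breaks down as follows; this is an annotated sketch of the same command (flag meanings per the spdk_nvme_perf usage text, not part of the job output), and the @85 stage below repeats it with -w write:

# -r : transport ID string selecting the vfio-user controller and subsystem NQN
# -s : DPDK hugepage memory size in MB; -g : single file descriptor for DPDK memory segments
# -q : queue depth; -o : I/O size in bytes; -w : I/O pattern; -t : run time in seconds
# -c : core mask (0x2 pins the worker to core 1, matching the lcore 1 association above)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2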
00:13:35.130 ======================================================== 00:13:35.130 Latency(us) 00:13:35.130 Device Information : IOPS MiB/s Average min max 00:13:35.130 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39988.37 156.20 3200.80 832.79 6854.64 00:13:35.130 ======================================================== 00:13:35.130 Total : 39988.37 156.20 3200.80 832.79 6854.64 00:13:35.130 00:13:35.130 [2024-07-15 13:59:33.055944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:35.130 13:59:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:35.130 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.130 [2024-07-15 13:59:33.235474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:40.497 Initializing NVMe Controllers 00:13:40.497 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:40.497 Initialization complete. Launching workers. 00:13:40.497 ======================================================== 00:13:40.497 Latency(us) 00:13:40.497 Device Information : IOPS MiB/s Average min max 00:13:40.497 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36095.14 141.00 3545.80 1096.10 7618.55 00:13:40.497 ======================================================== 00:13:40.497 Total : 36095.14 141.00 3545.80 1096.10 7618.55 00:13:40.497 00:13:40.497 [2024-07-15 13:59:38.256521] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:40.497 13:59:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:40.497 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.497 [2024-07-15 13:59:38.457681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.797 [2024-07-15 13:59:43.594839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.797 Initializing NVMe Controllers 00:13:45.797 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.797 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.797 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:45.797 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:45.797 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:45.797 Initialization complete. Launching workers. 
00:13:45.797 Starting thread on core 2 00:13:45.797 Starting thread on core 3 00:13:45.797 Starting thread on core 1 00:13:45.797 13:59:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:45.797 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.797 [2024-07-15 13:59:43.859169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:49.108 [2024-07-15 13:59:46.916195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:49.108 Initializing NVMe Controllers 00:13:49.108 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:49.108 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:49.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:49.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:49.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:49.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:49.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:49.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:49.108 Initialization complete. Launching workers. 00:13:49.108 Starting thread on core 1 with urgent priority queue 00:13:49.108 Starting thread on core 2 with urgent priority queue 00:13:49.108 Starting thread on core 3 with urgent priority queue 00:13:49.108 Starting thread on core 0 with urgent priority queue 00:13:49.108 SPDK bdev Controller (SPDK2 ) core 0: 16116.67 IO/s 6.20 secs/100000 ios 00:13:49.108 SPDK bdev Controller (SPDK2 ) core 1: 9575.33 IO/s 10.44 secs/100000 ios 00:13:49.108 SPDK bdev Controller (SPDK2 ) core 2: 8384.00 IO/s 11.93 secs/100000 ios 00:13:49.108 SPDK bdev Controller (SPDK2 ) core 3: 12557.33 IO/s 7.96 secs/100000 ios 00:13:49.108 ======================================================== 00:13:49.108 00:13:49.108 13:59:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:49.108 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.108 [2024-07-15 13:59:47.192202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:49.108 Initializing NVMe Controllers 00:13:49.108 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:49.108 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:49.108 Namespace ID: 1 size: 0GB 00:13:49.108 Initialization complete. 00:13:49.108 INFO: using host memory buffer for IO 00:13:49.108 Hello world! 
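Stages @84 through @88 all exercise the same vfio-user controller with different example tools; condensed into a sketch (the SPDK and TRID shorthands are introduced here only for brevity, paths and flags exactly as recorded in this job):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2    # @84: 4K reads
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # @85: 4K writes
$SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE    # @86: 50/50 randrw, cores 1-3
$SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g                                # @87: urgent-priority queues
$SPDK/build/examples/hello_world -d 256 -g -r "$TRID"                                     # @88: basic I/O sanity check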
00:13:49.108 [2024-07-15 13:59:47.201274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:49.369 13:59:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:49.369 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.369 [2024-07-15 13:59:47.471043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.752 Initializing NVMe Controllers 00:13:50.752 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:50.752 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:50.752 Initialization complete. Launching workers. 00:13:50.752 submit (in ns) avg, min, max = 8350.7, 3896.7, 4000369.2 00:13:50.752 complete (in ns) avg, min, max = 16405.2, 2400.8, 4000481.7 00:13:50.752 00:13:50.752 Submit histogram 00:13:50.752 ================ 00:13:50.752 Range in us Cumulative Count 00:13:50.752 3.893 - 3.920: 1.1201% ( 219) 00:13:50.752 3.920 - 3.947: 5.3652% ( 830) 00:13:50.752 3.947 - 3.973: 13.1086% ( 1514) 00:13:50.752 3.973 - 4.000: 23.7214% ( 2075) 00:13:50.752 4.000 - 4.027: 35.8122% ( 2364) 00:13:50.752 4.027 - 4.053: 49.1919% ( 2616) 00:13:50.752 4.053 - 4.080: 66.0291% ( 3292) 00:13:50.752 4.080 - 4.107: 80.8255% ( 2893) 00:13:50.752 4.107 - 4.133: 91.0904% ( 2007) 00:13:50.752 4.133 - 4.160: 96.5119% ( 1060) 00:13:50.752 4.160 - 4.187: 98.4963% ( 388) 00:13:50.752 4.187 - 4.213: 99.2379% ( 145) 00:13:50.752 4.213 - 4.240: 99.4323% ( 38) 00:13:50.752 4.240 - 4.267: 99.4732% ( 8) 00:13:50.752 4.267 - 4.293: 99.4885% ( 3) 00:13:50.752 4.293 - 4.320: 99.4937% ( 1) 00:13:50.752 4.400 - 4.427: 99.4988% ( 1) 00:13:50.752 4.427 - 4.453: 99.5039% ( 1) 00:13:50.752 4.453 - 4.480: 99.5141% ( 2) 00:13:50.752 4.480 - 4.507: 99.5192% ( 1) 00:13:50.752 4.560 - 4.587: 99.5243% ( 1) 00:13:50.752 4.827 - 4.853: 99.5295% ( 1) 00:13:50.752 4.907 - 4.933: 99.5397% ( 2) 00:13:50.752 5.013 - 5.040: 99.5448% ( 1) 00:13:50.752 5.360 - 5.387: 99.5499% ( 1) 00:13:50.752 5.573 - 5.600: 99.5550% ( 1) 00:13:50.752 5.653 - 5.680: 99.5601% ( 1) 00:13:50.752 5.760 - 5.787: 99.5653% ( 1) 00:13:50.752 5.867 - 5.893: 99.5755% ( 2) 00:13:50.752 5.947 - 5.973: 99.5806% ( 1) 00:13:50.752 5.973 - 6.000: 99.5857% ( 1) 00:13:50.752 6.053 - 6.080: 99.5908% ( 1) 00:13:50.752 6.080 - 6.107: 99.6011% ( 2) 00:13:50.752 6.107 - 6.133: 99.6062% ( 1) 00:13:50.752 6.133 - 6.160: 99.6266% ( 4) 00:13:50.752 6.160 - 6.187: 99.6369% ( 2) 00:13:50.752 6.187 - 6.213: 99.6420% ( 1) 00:13:50.752 6.267 - 6.293: 99.6573% ( 3) 00:13:50.752 6.320 - 6.347: 99.6624% ( 1) 00:13:50.752 6.347 - 6.373: 99.6676% ( 1) 00:13:50.752 6.373 - 6.400: 99.6778% ( 2) 00:13:50.752 6.507 - 6.533: 99.6880% ( 2) 00:13:50.752 6.533 - 6.560: 99.6931% ( 1) 00:13:50.752 6.560 - 6.587: 99.6982% ( 1) 00:13:50.752 6.587 - 6.613: 99.7034% ( 1) 00:13:50.752 6.640 - 6.667: 99.7085% ( 1) 00:13:50.752 6.720 - 6.747: 99.7187% ( 2) 00:13:50.752 6.773 - 6.800: 99.7238% ( 1) 00:13:50.752 6.933 - 6.987: 99.7289% ( 1) 00:13:50.752 6.987 - 7.040: 99.7392% ( 2) 00:13:50.752 7.093 - 7.147: 99.7494% ( 2) 00:13:50.752 7.200 - 7.253: 99.7545% ( 1) 00:13:50.752 7.253 - 7.307: 99.7596% ( 1) 00:13:50.752 7.360 - 7.413: 99.7647% ( 1) 00:13:50.752 7.520 - 7.573: 99.7801% ( 3) 00:13:50.752 7.573 - 7.627: 99.7903% ( 2) 00:13:50.752 7.627 - 7.680: 99.7954% ( 1) 
00:13:50.752 7.680 - 7.733: 99.8005% ( 1) 00:13:50.752 7.733 - 7.787: 99.8056% ( 1) 00:13:50.752 8.000 - 8.053: 99.8108% ( 1) 00:13:50.752 8.053 - 8.107: 99.8159% ( 1) 00:13:50.752 8.107 - 8.160: 99.8210% ( 1) 00:13:50.752 8.160 - 8.213: 99.8312% ( 2) 00:13:50.752 8.320 - 8.373: 99.8414% ( 2) 00:13:50.752 8.427 - 8.480: 99.8517% ( 2) 00:13:50.752 8.587 - 8.640: 99.8568% ( 1) 00:13:50.752 8.640 - 8.693: 99.8619% ( 1) 00:13:50.752 8.800 - 8.853: 99.8670% ( 1) 00:13:50.752 9.120 - 9.173: 99.8721% ( 1) 00:13:50.752 9.280 - 9.333: 99.8773% ( 1) 00:13:50.752 9.547 - 9.600: 99.8824% ( 1) 00:13:50.752 9.600 - 9.653: 99.8875% ( 1) 00:13:50.752 10.240 - 10.293: 99.8926% ( 1) 00:13:50.752 3986.773 - 4014.080: 100.0000% ( 21) 00:13:50.752 00:13:50.752
Complete histogram 00:13:50.753 ================== 00:13:50.753 Range in us Cumulative Count 00:13:50.752 2.400 - 2.413: 0.0563% ( 11) 00:13:50.752 2.413 - 2.427: 0.7979% ( 145) 00:13:50.752 2.427 - 2.440: 0.9002% ( 20) 00:13:50.752 [2024-07-15 13:59:48.570422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.753 2.440 - 2.453: 1.0996% ( 39) 00:13:50.753 2.453 - 2.467: 1.1252% ( 5) 00:13:50.753 2.467 - 2.480: 37.0806% ( 7030) 00:13:50.753 2.480 - 2.493: 56.3830% ( 3774) 00:13:50.753 2.493 - 2.507: 68.6221% ( 2393) 00:13:50.753 2.507 - 2.520: 77.2453% ( 1686) 00:13:50.753 2.520 - 2.533: 80.9840% ( 731) 00:13:50.753 2.533 - 2.547: 83.3725% ( 467) 00:13:50.753 2.547 - 2.560: 88.6303% ( 1028) 00:13:50.753 2.560 - 2.573: 93.7960% ( 1010) 00:13:50.753 2.573 - 2.587: 96.5681% ( 542) 00:13:50.753 2.587 - 2.600: 98.1536% ( 310) 00:13:50.753 2.600 - 2.613: 99.0947% ( 184) 00:13:50.753 2.613 - 2.627: 99.3198% ( 44) 00:13:50.753 2.627 - 2.640: 99.3760% ( 11) 00:13:50.753 4.400 - 4.427: 99.3811% ( 1) 00:13:50.753 4.453 - 4.480: 99.3863% ( 1) 00:13:50.753 4.480 - 4.507: 99.3914% ( 1) 00:13:50.753 4.533 - 4.560: 99.3965% ( 1) 00:13:50.753 4.587 - 4.613: 99.4067% ( 2) 00:13:50.753 4.640 - 4.667: 99.4118% ( 1) 00:13:50.753 4.667 - 4.693: 99.4169% ( 1) 00:13:50.753 4.693 - 4.720: 99.4272% ( 2) 00:13:50.753 4.720 - 4.747: 99.4323% ( 1) 00:13:50.753 4.747 - 4.773: 99.4374% ( 1) 00:13:50.753 4.773 - 4.800: 99.4476% ( 2) 00:13:50.753 4.800 - 4.827: 99.4527% ( 1) 00:13:50.753 4.827 - 4.853: 99.4579% ( 1) 00:13:50.753 4.853 - 4.880: 99.4630% ( 1) 00:13:50.753 4.880 - 4.907: 99.4732% ( 2) 00:13:50.753 4.907 - 4.933: 99.4783% ( 1) 00:13:50.753 4.960 - 4.987: 99.4834% ( 1) 00:13:50.753 4.987 - 5.013: 99.4885% ( 1) 00:13:50.753 5.013 - 5.040: 99.4937% ( 1) 00:13:50.753 5.067 - 5.093: 99.5090% ( 3) 00:13:50.753 5.093 - 5.120: 99.5141% ( 1) 00:13:50.753 5.120 - 5.147: 99.5192% ( 1) 00:13:50.753 5.200 - 5.227: 99.5243% ( 1) 00:13:50.753 5.280 - 5.307: 99.5295% ( 1) 00:13:50.753 5.333 - 5.360: 99.5346% ( 1) 00:13:50.753 5.360 - 5.387: 99.5397% ( 1) 00:13:50.753 5.440 - 5.467: 99.5448% ( 1) 00:13:50.753 5.680 - 5.707: 99.5499% ( 1) 00:13:50.753 5.813 - 5.840: 99.5601% ( 2) 00:13:50.753 5.893 - 5.920: 99.5653% ( 1) 00:13:50.753 6.080 - 6.107: 99.5704% ( 1) 00:13:50.753 6.133 - 6.160: 99.5806% ( 2) 00:13:50.753 6.240 - 6.267: 99.5857% ( 1) 00:13:50.753 6.267 - 6.293: 99.5908% ( 1) 00:13:50.753 6.320 - 6.347: 99.5959% ( 1) 00:13:50.753 6.400 - 6.427: 99.6011% ( 1) 00:13:50.753 6.720 - 6.747: 99.6062% ( 1) 00:13:50.753 6.747 - 6.773: 99.6113% ( 1) 00:13:50.753 6.933 - 6.987: 99.6164% ( 1) 00:13:50.753 7.040 - 7.093: 99.6215% ( 1) 00:13:50.753 7.147 - 7.200: 99.6266% ( 1) 00:13:50.753 7.360 - 7.413: 99.6318% ( 1) 00:13:50.753
7.787 - 7.840: 99.6369% ( 1) 00:13:50.753 10.933 - 10.987: 99.6420% ( 1) 00:13:50.753 12.053 - 12.107: 99.6471% ( 1) 00:13:50.753 14.400 - 14.507: 99.6522% ( 1) 00:13:50.753 3986.773 - 4014.080: 100.0000% ( 68) 00:13:50.753 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:50.753 [ 00:13:50.753 { 00:13:50.753 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:50.753 "subtype": "Discovery", 00:13:50.753 "listen_addresses": [], 00:13:50.753 "allow_any_host": true, 00:13:50.753 "hosts": [] 00:13:50.753 }, 00:13:50.753 { 00:13:50.753 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:50.753 "subtype": "NVMe", 00:13:50.753 "listen_addresses": [ 00:13:50.753 { 00:13:50.753 "trtype": "VFIOUSER", 00:13:50.753 "adrfam": "IPv4", 00:13:50.753 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:50.753 "trsvcid": "0" 00:13:50.753 } 00:13:50.753 ], 00:13:50.753 "allow_any_host": true, 00:13:50.753 "hosts": [], 00:13:50.753 "serial_number": "SPDK1", 00:13:50.753 "model_number": "SPDK bdev Controller", 00:13:50.753 "max_namespaces": 32, 00:13:50.753 "min_cntlid": 1, 00:13:50.753 "max_cntlid": 65519, 00:13:50.753 "namespaces": [ 00:13:50.753 { 00:13:50.753 "nsid": 1, 00:13:50.753 "bdev_name": "Malloc1", 00:13:50.753 "name": "Malloc1", 00:13:50.753 "nguid": "784B19845D1C4FCAB1A31D27ABD5C8BE", 00:13:50.753 "uuid": "784b1984-5d1c-4fca-b1a3-1d27abd5c8be" 00:13:50.753 }, 00:13:50.753 { 00:13:50.753 "nsid": 2, 00:13:50.753 "bdev_name": "Malloc3", 00:13:50.753 "name": "Malloc3", 00:13:50.753 "nguid": "5DB6E4433B0F4384B9FC34A2B5AF9487", 00:13:50.753 "uuid": "5db6e443-3b0f-4384-b9fc-34a2b5af9487" 00:13:50.753 } 00:13:50.753 ] 00:13:50.753 }, 00:13:50.753 { 00:13:50.753 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:50.753 "subtype": "NVMe", 00:13:50.753 "listen_addresses": [ 00:13:50.753 { 00:13:50.753 "trtype": "VFIOUSER", 00:13:50.753 "adrfam": "IPv4", 00:13:50.753 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:50.753 "trsvcid": "0" 00:13:50.753 } 00:13:50.753 ], 00:13:50.753 "allow_any_host": true, 00:13:50.753 "hosts": [], 00:13:50.753 "serial_number": "SPDK2", 00:13:50.753 "model_number": "SPDK bdev Controller", 00:13:50.753 "max_namespaces": 32, 00:13:50.753 "min_cntlid": 1, 00:13:50.753 "max_cntlid": 65519, 00:13:50.753 "namespaces": [ 00:13:50.753 { 00:13:50.753 "nsid": 1, 00:13:50.753 "bdev_name": "Malloc2", 00:13:50.753 "name": "Malloc2", 00:13:50.753 "nguid": "1F4ED1348EDD4FFBA38BCCD25C68D290", 00:13:50.753 "uuid": "1f4ed134-8edd-4ffb-a38b-ccd25c68d290" 00:13:50.753 } 00:13:50.753 ] 00:13:50.753 } 00:13:50.753 ] 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 
subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1281322 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:50.753 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:50.753 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.013 Malloc4 00:13:51.013 [2024-07-15 13:59:48.961153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:51.013 13:59:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:51.013 [2024-07-15 13:59:49.113177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:51.272 Asynchronous Event Request test 00:13:51.272 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:51.272 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:51.272 Registering asynchronous event callbacks... 00:13:51.272 Starting namespace attribute notice tests for all controllers... 00:13:51.272 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:51.272 aer_cb - Changed Namespace 00:13:51.272 Cleaning up... 
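The Changed Namespace notice above is produced by hot-adding a second namespace while the aer example waits; reduced to its two RPC calls it looks like this (a sketch reusing the names from this run, with RPC introduced here only as a shorthand):

# Create a 64 MB malloc bdev with 512-byte blocks, then attach it to the live
# subsystem as NSID 2; the aer example then receives the Namespace Attribute
# Changed async event and reads log page 4 (changed namespace list), as logged above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 --name Malloc4
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2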
00:13:51.272 [ 00:13:51.272 { 00:13:51.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:51.272 "subtype": "Discovery", 00:13:51.272 "listen_addresses": [], 00:13:51.272 "allow_any_host": true, 00:13:51.272 "hosts": [] 00:13:51.272 }, 00:13:51.272 { 00:13:51.272 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:51.272 "subtype": "NVMe", 00:13:51.272 "listen_addresses": [ 00:13:51.272 { 00:13:51.272 "trtype": "VFIOUSER", 00:13:51.272 "adrfam": "IPv4", 00:13:51.272 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:51.272 "trsvcid": "0" 00:13:51.272 } 00:13:51.272 ], 00:13:51.272 "allow_any_host": true, 00:13:51.272 "hosts": [], 00:13:51.272 "serial_number": "SPDK1", 00:13:51.272 "model_number": "SPDK bdev Controller", 00:13:51.272 "max_namespaces": 32, 00:13:51.272 "min_cntlid": 1, 00:13:51.272 "max_cntlid": 65519, 00:13:51.272 "namespaces": [ 00:13:51.272 { 00:13:51.272 "nsid": 1, 00:13:51.272 "bdev_name": "Malloc1", 00:13:51.272 "name": "Malloc1", 00:13:51.272 "nguid": "784B19845D1C4FCAB1A31D27ABD5C8BE", 00:13:51.272 "uuid": "784b1984-5d1c-4fca-b1a3-1d27abd5c8be" 00:13:51.272 }, 00:13:51.272 { 00:13:51.272 "nsid": 2, 00:13:51.272 "bdev_name": "Malloc3", 00:13:51.272 "name": "Malloc3", 00:13:51.272 "nguid": "5DB6E4433B0F4384B9FC34A2B5AF9487", 00:13:51.272 "uuid": "5db6e443-3b0f-4384-b9fc-34a2b5af9487" 00:13:51.272 } 00:13:51.272 ] 00:13:51.272 }, 00:13:51.272 { 00:13:51.272 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:51.272 "subtype": "NVMe", 00:13:51.272 "listen_addresses": [ 00:13:51.272 { 00:13:51.272 "trtype": "VFIOUSER", 00:13:51.272 "adrfam": "IPv4", 00:13:51.272 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:51.272 "trsvcid": "0" 00:13:51.272 } 00:13:51.272 ], 00:13:51.272 "allow_any_host": true, 00:13:51.272 "hosts": [], 00:13:51.272 "serial_number": "SPDK2", 00:13:51.272 "model_number": "SPDK bdev Controller", 00:13:51.272 "max_namespaces": 32, 00:13:51.272 "min_cntlid": 1, 00:13:51.272 "max_cntlid": 65519, 00:13:51.272 "namespaces": [ 00:13:51.272 { 00:13:51.272 "nsid": 1, 00:13:51.272 "bdev_name": "Malloc2", 00:13:51.272 "name": "Malloc2", 00:13:51.272 "nguid": "1F4ED1348EDD4FFBA38BCCD25C68D290", 00:13:51.272 "uuid": "1f4ed134-8edd-4ffb-a38b-ccd25c68d290" 00:13:51.272 }, 00:13:51.272 { 00:13:51.272 "nsid": 2, 00:13:51.272 "bdev_name": "Malloc4", 00:13:51.272 "name": "Malloc4", 00:13:51.272 "nguid": "0F2017681540448C9065A945B322D1BC", 00:13:51.272 "uuid": "0f201768-1540-448c-9065-a945b322d1bc" 00:13:51.272 } 00:13:51.272 ] 00:13:51.272 } 00:13:51.272 ] 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1281322 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1272230 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1272230 ']' 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1272230 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1272230 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1272230' 00:13:51.272 killing process with pid 1272230 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1272230 00:13:51.272 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1272230 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1281605 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1281605' 00:13:51.531 Process pid: 1281605 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1281605 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1281605 ']' 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.531 13:59:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:51.531 [2024-07-15 13:59:49.595291] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:51.531 [2024-07-15 13:59:49.595975] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:51.531 [2024-07-15 13:59:49.596013] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.531 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.792 [2024-07-15 13:59:49.653582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.792 [2024-07-15 13:59:49.717389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.792 [2024-07-15 13:59:49.717424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:51.792 [2024-07-15 13:59:49.717432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.792 [2024-07-15 13:59:49.717438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.792 [2024-07-15 13:59:49.717444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.792 [2024-07-15 13:59:49.717581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.792 [2024-07-15 13:59:49.717706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.792 [2024-07-15 13:59:49.717856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.792 [2024-07-15 13:59:49.718024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.792 [2024-07-15 13:59:49.781704] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:51.792 [2024-07-15 13:59:49.781719] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:51.792 [2024-07-15 13:59:49.782889] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:51.792 [2024-07-15 13:59:49.783194] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:51.792 [2024-07-15 13:59:49.783278] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:52.361 13:59:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.361 13:59:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:52.361 13:59:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:53.302 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:53.562 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:53.562 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:53.562 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:53.562 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:53.562 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:53.822 Malloc1 00:13:53.822 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:53.822 13:59:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:54.081 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:54.340 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:54.340 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:54.340 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:54.340 Malloc2 00:13:54.340 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:54.599 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1281605 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1281605 ']' 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1281605 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:54.860 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1281605 00:13:55.119 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.119 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.119 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1281605' 00:13:55.119 killing process with pid 1281605 00:13:55.119 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1281605 00:13:55.119 13:59:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1281605 00:13:55.119 13:59:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:55.119 13:59:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:55.119 00:13:55.119 real 0m50.621s 00:13:55.119 user 3m20.658s 00:13:55.119 sys 0m3.043s 00:13:55.119 13:59:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.119 13:59:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:55.119 ************************************ 00:13:55.119 END TEST nvmf_vfio_user 00:13:55.119 ************************************ 00:13:55.119 13:59:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:55.119 13:59:53 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:55.119 13:59:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.119 13:59:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.119 13:59:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.119 ************************************ 00:13:55.119 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:55.119 ************************************ 00:13:55.119 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:55.379 * Looking for test storage... 00:13:55.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1282404 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1282404' 00:13:55.379 Process pid: 1282404 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1282404 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1282404 ']' 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.379 13:59:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:55.379 [2024-07-15 13:59:53.405147] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:55.379 [2024-07-15 13:59:53.405237] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.379 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.379 [2024-07-15 13:59:53.476813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.639 [2024-07-15 13:59:53.541131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.639 [2024-07-15 13:59:53.541168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.639 [2024-07-15 13:59:53.541175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.639 [2024-07-15 13:59:53.541182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.639 [2024-07-15 13:59:53.541187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
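After a short sleep gives the target time to come up, the compliance script provisions its vfio-user endpoint purely over the RPC socket (the rpc_cmd calls that follow below). Condensed into a standalone sketch, assuming rpc.py from the same checkout:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  mkdir -p /var/run/vfio-user
  $RPC nvmf_create_transport -t VFIOUSER               # register the vfio-user transport
  $RPC bdev_malloc_create 64 512 -b malloc0            # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER \
       -a /var/run/vfio-user -s 0                      # the socket directory is the 'address'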
00:13:55.639 [2024-07-15 13:59:53.541327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.639 [2024-07-15 13:59:53.541447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.639 [2024-07-15 13:59:53.541450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.209 13:59:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.209 13:59:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:56.209 13:59:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:57.151 malloc0 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:57.151 13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.151 
13:59:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:57.420 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.420 00:13:57.420 00:13:57.420 CUnit - A unit testing framework for C - Version 2.1-3 00:13:57.420 http://cunit.sourceforge.net/ 00:13:57.420 00:13:57.420 00:13:57.420 Suite: nvme_compliance 00:13:57.420 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 13:59:55.442223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.420 [2024-07-15 13:59:55.443566] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:57.420 [2024-07-15 13:59:55.443576] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:57.420 [2024-07-15 13:59:55.443581] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:57.420 [2024-07-15 13:59:55.445238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.420 passed 00:13:57.680 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 13:59:55.538779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.680 [2024-07-15 13:59:55.541797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.680 passed 00:13:57.680 Test: admin_identify_ns ...[2024-07-15 13:59:55.638001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.680 [2024-07-15 13:59:55.697766] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:57.680 [2024-07-15 13:59:55.705765] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:57.680 [2024-07-15 13:59:55.726883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.680 passed 00:13:57.940 Test: admin_get_features_mandatory_features ...[2024-07-15 13:59:55.821901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.940 [2024-07-15 13:59:55.824916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.940 passed 00:13:57.940 Test: admin_get_features_optional_features ...[2024-07-15 13:59:55.919444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.940 [2024-07-15 13:59:55.922460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.940 passed 00:13:57.940 Test: admin_set_features_number_of_queues ...[2024-07-15 13:59:56.014573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.201 [2024-07-15 13:59:56.119867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.201 passed 00:13:58.201 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 13:59:56.212525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.201 [2024-07-15 13:59:56.215542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.201 passed 00:13:58.201 Test: admin_get_log_page_with_lpo ...[2024-07-15 13:59:56.308680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.461 [2024-07-15 13:59:56.373765] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:58.461 [2024-07-15 13:59:56.386804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.461 passed 00:13:58.461 Test: fabric_property_get ...[2024-07-15 13:59:56.480858] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.461 [2024-07-15 13:59:56.482105] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:58.461 [2024-07-15 13:59:56.483882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.461 passed 00:13:58.721 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 13:59:56.580437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.721 [2024-07-15 13:59:56.581708] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:58.721 [2024-07-15 13:59:56.583462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.721 passed 00:13:58.721 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 13:59:56.675007] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.721 [2024-07-15 13:59:56.758759] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:58.721 [2024-07-15 13:59:56.774760] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:58.721 [2024-07-15 13:59:56.779839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.721 passed 00:13:58.981 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 13:59:56.873847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.981 [2024-07-15 13:59:56.875083] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:58.981 [2024-07-15 13:59:56.876862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.981 passed 00:13:58.981 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 13:59:56.970009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:58.981 [2024-07-15 13:59:57.045760] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:58.981 [2024-07-15 13:59:57.069759] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:58.981 [2024-07-15 13:59:57.074847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:59.240 passed 00:13:59.240 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 13:59:57.168864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:59.240 [2024-07-15 13:59:57.170105] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:59.240 [2024-07-15 13:59:57.170125] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:59.241 [2024-07-15 13:59:57.171882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:59.241 passed 00:13:59.241 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 13:59:57.264979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:59.500 [2024-07-15 13:59:57.356762] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:59.500 [2024-07-15 13:59:57.364760] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:59.500 [2024-07-15 13:59:57.372763] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:59.500 [2024-07-15 13:59:57.380769] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:59.500 [2024-07-15 13:59:57.409841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:59.500 passed 00:13:59.500 Test: admin_create_io_sq_verify_pc ...[2024-07-15 13:59:57.503842] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:59.500 [2024-07-15 13:59:57.519769] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:59.500 [2024-07-15 13:59:57.536995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:59.500 passed 00:13:59.760 Test: admin_create_io_qp_max_qps ...[2024-07-15 13:59:57.631536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:00.700 [2024-07-15 13:59:58.736763] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:01.268 [2024-07-15 13:59:59.119836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:01.268 passed 00:14:01.269 Test: admin_create_io_sq_shared_cq ...[2024-07-15 13:59:59.212016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:01.269 [2024-07-15 13:59:59.343759] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:01.269 [2024-07-15 13:59:59.380815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:01.529 passed 00:14:01.529 00:14:01.529 Run Summary: Type Total Ran Passed Failed Inactive 00:14:01.529 suites 1 1 n/a 0 0 00:14:01.529 tests 18 18 18 0 0 00:14:01.529 asserts 360 360 360 0 n/a 00:14:01.529 00:14:01.529 Elapsed time = 1.650 seconds 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1282404 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1282404 ']' 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1282404 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1282404 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1282404' 00:14:01.529 killing process with pid 1282404 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1282404 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1282404 00:14:01.529 13:59:59 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:01.529 00:14:01.529 real 0m6.423s 00:14:01.529 user 0m18.365s 00:14:01.529 sys 0m0.485s 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.529 13:59:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:01.529 ************************************ 00:14:01.529 END TEST nvmf_vfio_user_nvme_compliance 00:14:01.529 ************************************ 00:14:01.789 13:59:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:01.789 13:59:59 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:01.789 13:59:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.789 13:59:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.789 13:59:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.789 ************************************ 00:14:01.789 START TEST nvmf_vfio_user_fuzz 00:14:01.789 ************************************ 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:01.789 * Looking for test storage... 00:14:01.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
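The teardown above goes through the killprocess helper from common/autotest_common.sh. A simplified sketch of the guarded pattern it applies each time (the real helper additionally special-cases sudo-wrapped targets rather than refusing outright):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard seen above
      kill -0 "$pid" 2>/dev/null || return 1     # still alive?
      [ "$(uname)" = Linux ] &&
          process_name=$(ps --no-headers -o comm= "$pid")
      # Refuse the plain kill if we'd hit the sudo wrapper (simplified here).
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap it and propagate exit status
  }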
00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.789 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.790 13:59:59 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1283747 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1283747' 00:14:01.790 Process pid: 1283747 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1283747 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1283747 ']' 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
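The waitforlisten step above boils down to polling the RPC socket until the freshly forked target answers. A sketch under the same names, with the retry count and socket path as recorded in this run (the exact probe RPC is an assumption; rpc_get_methods is what the stock helper queries):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( max_retries-- )); do
          kill -0 "$pid" 2>/dev/null || return 1     # target died before listening
          "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods \
              &>/dev/null && return 0                # socket is up and answering
          sleep 0.5
      done
      return 1
  }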
00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.790 13:59:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:02.729 14:00:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.729 14:00:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:14:02.729 14:00:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:03.669 malloc0 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.669 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:03.670 14:00:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:35.890 Fuzzing completed. 
Shutting down the fuzz application 00:14:35.890 00:14:35.890 Dumping successful admin opcodes: 00:14:35.890 8, 9, 10, 24, 00:14:35.890 Dumping successful io opcodes: 00:14:35.890 0, 00:14:35.890 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1124372, total successful commands: 4424, random_seed: 3332562752 00:14:35.890 NS: 0x200003a1ef00 admin qp, Total commands completed: 141484, total successful commands: 1148, random_seed: 3332654976 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1283747 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1283747 ']' 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1283747 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1283747 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1283747' 00:14:35.890 killing process with pid 1283747 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1283747 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1283747 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:35.890 00:14:35.890 real 0m33.613s 00:14:35.890 user 0m37.655s 00:14:35.890 sys 0m25.920s 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.890 14:00:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:35.890 ************************************ 00:14:35.890 END TEST nvmf_vfio_user_fuzz 00:14:35.890 ************************************ 00:14:35.890 14:00:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.890 14:00:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:35.890 14:00:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.891 14:00:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.891 14:00:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.891 ************************************ 
00:14:35.891 START TEST nvmf_host_management 00:14:35.891 ************************************ 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:35.891 * Looking for test storage... 00:14:35.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.891 
14:00:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.891 14:00:33 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.891 14:00:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:44.027 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:44.027 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:44.027 Found net devices under 0000:31:00.0: cvl_0_0 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:44.027 Found net devices under 0000:31:00.1: cvl_0_1 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.027 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:44.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:14:44.027 00:14:44.028 --- 10.0.0.2 ping statistics --- 00:14:44.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.028 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:14:44.028 00:14:44.028 --- 10.0.0.1 ping statistics --- 00:14:44.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.028 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1295018 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1295018 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1295018 ']' 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:44.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.028 14:00:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.028 [2024-07-15 14:00:41.795893] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:44.028 [2024-07-15 14:00:41.795959] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.028 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.028 [2024-07-15 14:00:41.893526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.028 [2024-07-15 14:00:41.990332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.028 [2024-07-15 14:00:41.990395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.028 [2024-07-15 14:00:41.990404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.028 [2024-07-15 14:00:41.990411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.028 [2024-07-15 14:00:41.990417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.028 [2024-07-15 14:00:41.990550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.028 [2024-07-15 14:00:41.990690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.028 [2024-07-15 14:00:41.990826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:44.028 [2024-07-15 14:00:41.990856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.597 [2024-07-15 14:00:42.618316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.597 14:00:42 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.597 Malloc0 00:14:44.597 [2024-07-15 14:00:42.677452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.597 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1295257 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1295257 /var/tmp/bdevperf.sock 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1295257 ']' 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
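The rpcs.txt batch that the @23 cat pipes into rpc_cmd is not echoed in the log, but it is what produced the Malloc0 bdev and the nqn.2016-06.io.spdk:cnode0 listener on 10.0.0.2:4420 noted above. A representative sequence for this test, with the bdev size, block size, and serial number assumed rather than taken from the log:

rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0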
00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:44.857 { 00:14:44.857 "params": { 00:14:44.857 "name": "Nvme$subsystem", 00:14:44.857 "trtype": "$TEST_TRANSPORT", 00:14:44.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:44.857 "adrfam": "ipv4", 00:14:44.857 "trsvcid": "$NVMF_PORT", 00:14:44.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:44.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:44.857 "hdgst": ${hdgst:-false}, 00:14:44.857 "ddgst": ${ddgst:-false} 00:14:44.857 }, 00:14:44.857 "method": "bdev_nvme_attach_controller" 00:14:44.857 } 00:14:44.857 EOF 00:14:44.857 )") 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:44.857 14:00:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:44.857 "params": { 00:14:44.857 "name": "Nvme0", 00:14:44.857 "trtype": "tcp", 00:14:44.857 "traddr": "10.0.0.2", 00:14:44.857 "adrfam": "ipv4", 00:14:44.857 "trsvcid": "4420", 00:14:44.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:44.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:44.857 "hdgst": false, 00:14:44.857 "ddgst": false 00:14:44.857 }, 00:14:44.857 "method": "bdev_nvme_attach_controller" 00:14:44.857 }' 00:14:44.857 [2024-07-15 14:00:42.773208] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:44.857 [2024-07-15 14:00:42.773259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295257 ] 00:14:44.857 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.857 [2024-07-15 14:00:42.838889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.857 [2024-07-15 14:00:42.903657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.116 Running I/O for 10 seconds... 
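The --json /dev/fd/63 argument in the @72 bdevperf command is bash process substitution: the JSON emitted by gen_nvmf_target_json (shown above, with hdgst/ddgst defaulting to false in the heredoc template) reaches bdevperf as an already-open file descriptor, so no temporary config file is needed. Stripped of the Jenkins paths, the invocation is equivalent to:

# one Nvme0 attach target; 64 outstanding 64 KiB verify I/Os for 10 seconds
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10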
00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.687 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:45.687 [2024-07-15 14:00:43.624517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70fe20 is same with the state(5) to be set 00:14:45.687 [2024-07-15 14:00:43.624562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70fe20 is same with the state(5) to be set 00:14:45.687 [2024-07-15 14:00:43.624570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70fe20 is same with the state(5) to be set 
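The @55 iostat poll and the @84 host removal above are the heart of the test: the script waits until bdevperf has completed a minimum number of reads, then revokes the host's access while I/O is still outstanding, which is what triggers the burst of recv-state errors and aborted commands around this point. A condensed sketch of the waitforio loop (retry count and threshold from the log; the sleep interval is assumed):

# poll bdevperf over its RPC socket until Nvme0n1 reports >= 100 completed reads
i=10
while (( i != 0 )); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    (( read_io_count >= 100 )) && break   # 771 on the first poll here
    sleep 1
    (( i-- ))
done
# revoke the host's authorization mid-I/O
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0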
00:14:45.687 [2024-07-15 14:00:43.624577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70fe20 is same with the state(5) to be set
(the same recv-state error for tqpair=0x70fe20 repeats verbatim, roughly sixty times, through 14:00:43.624973)
00:14:45.687 [2024-07-15 14:00:43.625539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:45.687 [2024-07-15 14:00:43.625574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(matching READ command/completion pairs follow for cid:1 through cid:63, lba 106624 through 114560, every in-flight command aborted with the same SQ DELETION status)
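The (00/08) in these completions decodes as status code type 0x0 (generic command status) and status code 0x08, ABORTED - SQ DELETION: once the target revokes the host it tears down the queue pair, and every command still queued on it completes with that status rather than being silently dropped. A quick way to confirm the host really is gone from the subsystem's allow list (a hypothetical check, not part of the test script):

rpc_cmd nvmf_get_subsystems |
    jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | .hosts'
# expect an empty list at this point; host0 reappears after the @85 add_host below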
00:14:45.688 [2024-07-15 14:00:43.626678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101f850 is same with the state(5) to be set
00:14:45.688 [2024-07-15 14:00:43.626720] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x101f850 was disconnected and freed. reset controller.
00:14:45.688 [2024-07-15 14:00:43.626765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:14:45.688 [2024-07-15 14:00:43.626775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the admin queue's remaining async event requests, cid:1 through cid:3, are aborted with the same status)
00:14:45.688 [2024-07-15 14:00:43.626829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e540 is same with the state(5) to be set
00:14:45.688 [2024-07-15 14:00:43.628048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:45.688 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:45.688 task offset: 106496 on job bdev=Nvme0n1 fails
00:14:45.688 Latency(us)
00:14:45.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:45.688 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:45.688 Job: Nvme0n1 ended in about 0.57 seconds with error
00:14:45.688 Verification LBA range: start 0x0 length 0x400
00:14:45.688 Nvme0n1 : 0.57 1447.57 90.47 111.35 0.00 40089.55 6253.23 33423.36
00:14:45.688 ===================================================================================================================
00:14:45.688 Total : 1447.57 90.47 111.35 0.00 40089.55 6253.23 33423.36
00:14:45.688 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:45.688 [2024-07-15 14:00:43.630032] app.c:1053:spdk_app_stop:
*WARNING*: spdk_app_stop'd on non-zero 00:14:45.688 [2024-07-15 14:00:43.630053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e540 (9): Bad file descriptor 00:14:45.688 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.688 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:45.688 14:00:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.688 14:00:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:45.688 [2024-07-15 14:00:43.651329] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1295257 00:14:46.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1295257) - No such process 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:46.629 { 00:14:46.629 "params": { 00:14:46.629 "name": "Nvme$subsystem", 00:14:46.629 "trtype": "$TEST_TRANSPORT", 00:14:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:46.629 "adrfam": "ipv4", 00:14:46.629 "trsvcid": "$NVMF_PORT", 00:14:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:46.629 "hdgst": ${hdgst:-false}, 00:14:46.629 "ddgst": ${ddgst:-false} 00:14:46.629 }, 00:14:46.629 "method": "bdev_nvme_attach_controller" 00:14:46.629 } 00:14:46.629 EOF 00:14:46.629 )") 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:46.629 14:00:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:46.629 "params": { 00:14:46.629 "name": "Nvme0", 00:14:46.629 "trtype": "tcp", 00:14:46.629 "traddr": "10.0.0.2", 00:14:46.629 "adrfam": "ipv4", 00:14:46.629 "trsvcid": "4420", 00:14:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.629 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:46.629 "hdgst": false, 00:14:46.629 "ddgst": false 00:14:46.629 }, 00:14:46.629 "method": "bdev_nvme_attach_controller" 00:14:46.629 }' 00:14:46.629 [2024-07-15 14:00:44.698435] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
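This second bdevperf pass is the recovery half of the test: with nqn.2016-06.io.spdk:host0 re-added at @85 and the controller reset reported successful, one second of the same verify workload has to run cleanly. The "line 91: kill: (1295257) - No such process" message above is expected, since the first bdevperf already exited when its app stopped; the script absorbs the failure:

# host_management.sh @91: tolerate the already-gone perf process
kill -9 $perfpid || true
# @100: recovery run, same workload but only 1 second
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1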
00:14:46.629 [2024-07-15 14:00:44.698490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295719 ] 00:14:46.629 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.890 [2024-07-15 14:00:44.763409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.890 [2024-07-15 14:00:44.827257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.149 Running I/O for 1 seconds... 00:14:48.090 00:14:48.090 Latency(us) 00:14:48.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.090 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:48.090 Verification LBA range: start 0x0 length 0x400 00:14:48.090 Nvme0n1 : 1.00 1659.60 103.73 0.00 0.00 37890.87 6171.31 32986.45 00:14:48.090 =================================================================================================================== 00:14:48.090 Total : 1659.60 103.73 0.00 0.00 37890.87 6171.31 32986.45 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.351 rmmod nvme_tcp 00:14:48.351 rmmod nvme_fabrics 00:14:48.351 rmmod nvme_keyring 00:14:48.351 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1295018 ']' 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1295018 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1295018 ']' 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1295018 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1295018 00:14:48.352 14:00:46 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1295018' 00:14:48.352 killing process with pid 1295018 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1295018 00:14:48.352 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1295018 00:14:48.611 [2024-07-15 14:00:46.512034] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.611 14:00:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.532 14:00:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.532 14:00:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:50.532 00:14:50.532 real 0m15.208s 00:14:50.532 user 0m23.024s 00:14:50.532 sys 0m7.073s 00:14:50.532 14:00:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.532 14:00:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:50.532 ************************************ 00:14:50.532 END TEST nvmf_host_management 00:14:50.532 ************************************ 00:14:50.793 14:00:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:50.793 14:00:48 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:50.793 14:00:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:50.793 14:00:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.793 14:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.793 ************************************ 00:14:50.793 START TEST nvmf_lvol 00:14:50.793 ************************************ 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:50.793 * Looking for test storage... 
00:14:50.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.793 14:00:48 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.793 14:00:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:58.931 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:58.931 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:58.931 Found net devices under 0000:31:00.0: cvl_0_0 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:58.931 Found net devices under 0000:31:00.1: cvl_0_1 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:58.931 
14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.931 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:58.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:14:58.932 00:14:58.932 --- 10.0.0.2 ping statistics --- 00:14:58.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.932 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:14:58.932 00:14:58.932 --- 10.0.0.1 ping statistics --- 00:14:58.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.932 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1300757 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1300757 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1300757 ']' 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.932 14:00:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:58.932 [2024-07-15 14:00:57.042688] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:58.932 [2024-07-15 14:00:57.042749] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.192 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.193 [2024-07-15 14:00:57.122306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:59.193 [2024-07-15 14:00:57.196955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.193 [2024-07-15 14:00:57.196990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:59.193 [2024-07-15 14:00:57.196998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.193 [2024-07-15 14:00:57.197004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.193 [2024-07-15 14:00:57.197010] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.193 [2024-07-15 14:00:57.197147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.193 [2024-07-15 14:00:57.197267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.193 [2024-07-15 14:00:57.197270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.762 14:00:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.762 14:00:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:59.762 14:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.762 14:00:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.762 14:00:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:59.763 14:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.763 14:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.022 [2024-07-15 14:00:58.005195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.022 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.281 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:00.281 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.281 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:00.281 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:00.541 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:00.801 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=91b81433-531b-40d1-8189-10cf4730b8d0 00:15:00.801 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91b81433-531b-40d1-8189-10cf4730b8d0 lvol 20 00:15:00.801 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e37c0626-96c6-45b0-8b5f-f5235558be73 00:15:00.801 14:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:01.060 14:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e37c0626-96c6-45b0-8b5f-f5235558be73 00:15:01.319 14:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:15:01.319 [2024-07-15 14:00:59.396292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.319 14:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.578 14:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1301152 00:15:01.578 14:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:01.578 14:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:01.578 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.517 14:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e37c0626-96c6-45b0-8b5f-f5235558be73 MY_SNAPSHOT 00:15:02.777 14:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b7450382-595e-4fc5-9132-87dee0a36ca7 00:15:02.777 14:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e37c0626-96c6-45b0-8b5f-f5235558be73 30 00:15:03.044 14:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b7450382-595e-4fc5-9132-87dee0a36ca7 MY_CLONE 00:15:03.303 14:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=77dc671d-19a6-42b5-8f26-1f692bbd6888 00:15:03.304 14:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 77dc671d-19a6-42b5-8f26-1f692bbd6888 00:15:03.873 14:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1301152 00:15:12.050 Initializing NVMe Controllers 00:15:12.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:12.050 Controller IO queue size 128, less than required. 00:15:12.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:12.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:12.050 Initialization complete. Launching workers. 
00:15:12.050 ======================================================== 00:15:12.050 Latency(us) 00:15:12.050 Device Information : IOPS MiB/s Average min max 00:15:12.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12516.50 48.89 10228.98 1451.11 53978.80 00:15:12.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18284.20 71.42 7000.88 1364.62 40246.93 00:15:12.050 ======================================================== 00:15:12.050 Total : 30800.70 120.32 8312.69 1364.62 53978.80 00:15:12.050 00:15:12.050 14:01:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:12.050 14:01:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e37c0626-96c6-45b0-8b5f-f5235558be73 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91b81433-531b-40d1-8189-10cf4730b8d0 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.310 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.310 rmmod nvme_tcp 00:15:12.571 rmmod nvme_fabrics 00:15:12.571 rmmod nvme_keyring 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1300757 ']' 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1300757 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1300757 ']' 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1300757 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1300757 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1300757' 00:15:12.571 killing process with pid 1300757 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1300757 00:15:12.571 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1300757 00:15:12.831 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.831 
14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.831 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.831 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.831 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.831 14:01:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.831 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.831 14:01:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.744 14:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:14.744 00:15:14.744 real 0m24.085s 00:15:14.744 user 1m3.864s 00:15:14.744 sys 0m8.328s 00:15:14.744 14:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.744 14:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:14.744 ************************************ 00:15:14.744 END TEST nvmf_lvol 00:15:14.744 ************************************ 00:15:14.744 14:01:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:14.744 14:01:12 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:14.744 14:01:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:14.744 14:01:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.744 14:01:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.744 ************************************ 00:15:14.744 START TEST nvmf_lvs_grow 00:15:14.744 ************************************ 00:15:14.744 14:01:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:15.005 * Looking for test storage... 
00:15:15.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.005 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.006 14:01:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:23.157 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:23.157 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:23.157 Found net devices under 0000:31:00.0: cvl_0_0 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:23.157 Found net devices under 0000:31:00.1: cvl_0_1 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.157 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:23.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:15:23.158 00:15:23.158 --- 10.0.0.2 ping statistics --- 00:15:23.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.158 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:23.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:15:23.158 00:15:23.158 --- 10.0.0.1 ping statistics --- 00:15:23.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.158 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1308131 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1308131 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1308131 ']' 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.158 14:01:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:23.158 [2024-07-15 14:01:20.991312] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:23.158 [2024-07-15 14:01:20.991368] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.158 [2024-07-15 14:01:21.069390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.158 [2024-07-15 14:01:21.138501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.158 [2024-07-15 14:01:21.138541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:23.158 [2024-07-15 14:01:21.138548] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.158 [2024-07-15 14:01:21.138555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.158 [2024-07-15 14:01:21.138561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.158 [2024-07-15 14:01:21.138581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.730 14:01:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.730 14:01:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:23.730 14:01:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.730 14:01:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:23.730 14:01:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:23.730 14:01:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.730 14:01:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:23.991 [2024-07-15 14:01:21.949188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.991 14:01:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:23.991 14:01:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:23.991 14:01:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.991 14:01:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:23.991 ************************************ 00:15:23.991 START TEST lvs_grow_clean 00:15:23.991 ************************************ 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:23.991 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:23.992 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:23.992 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:24.252 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:24.252 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:24.514 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:24.514 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:24.514 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:24.514 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:24.514 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:24.514 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61de097a-333a-462b-a362-bd2c9e1e54bb lvol 150 00:15:24.775 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=12349cc3-c86b-446d-8967-10be31290919 00:15:24.775 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:24.775 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:24.775 [2024-07-15 14:01:22.814747] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:24.775 [2024-07-15 14:01:22.814802] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:24.775 true 00:15:24.775 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:24.776 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:25.037 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:25.037 14:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:25.037 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12349cc3-c86b-446d-8967-10be31290919 00:15:25.297 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:25.558 [2024-07-15 14:01:23.412597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1308544 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1308544 /var/tmp/bdevperf.sock 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1308544 ']' 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.558 14:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 [2024-07-15 14:01:23.627095] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
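For orientation before the bdevperf run starts: stripped of the xtrace noise, the clean variant has driven the target through the RPC sequence below. This is a condensed paraphrase of the trace above, not an extra command run; rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and <lvs-uuid>/<lvol-uuid> stand in for the generated 61de097a-... and 12349cc3-... IDs.

    truncate -s 200M aio_bdev                               # 200 MiB backing file
    rpc.py bdev_aio_create aio_bdev aio_bdev 4096           # AIO bdev with 4 KiB blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs    # yields 49 data clusters
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150          # 150 MiB logical volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420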
00:15:25.558 [2024-07-15 14:01:23.627144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308544 ] 00:15:25.558 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.819 [2024-07-15 14:01:23.706916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.819 [2024-07-15 14:01:23.770875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.390 14:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.390 14:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:26.390 14:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:26.651 Nvme0n1 00:15:26.651 14:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:26.911 [ 00:15:26.911 { 00:15:26.911 "name": "Nvme0n1", 00:15:26.911 "aliases": [ 00:15:26.911 "12349cc3-c86b-446d-8967-10be31290919" 00:15:26.911 ], 00:15:26.911 "product_name": "NVMe disk", 00:15:26.911 "block_size": 4096, 00:15:26.911 "num_blocks": 38912, 00:15:26.911 "uuid": "12349cc3-c86b-446d-8967-10be31290919", 00:15:26.911 "assigned_rate_limits": { 00:15:26.911 "rw_ios_per_sec": 0, 00:15:26.911 "rw_mbytes_per_sec": 0, 00:15:26.911 "r_mbytes_per_sec": 0, 00:15:26.911 "w_mbytes_per_sec": 0 00:15:26.911 }, 00:15:26.911 "claimed": false, 00:15:26.911 "zoned": false, 00:15:26.911 "supported_io_types": { 00:15:26.911 "read": true, 00:15:26.911 "write": true, 00:15:26.911 "unmap": true, 00:15:26.911 "flush": true, 00:15:26.911 "reset": true, 00:15:26.911 "nvme_admin": true, 00:15:26.911 "nvme_io": true, 00:15:26.911 "nvme_io_md": false, 00:15:26.911 "write_zeroes": true, 00:15:26.911 "zcopy": false, 00:15:26.911 "get_zone_info": false, 00:15:26.911 "zone_management": false, 00:15:26.911 "zone_append": false, 00:15:26.911 "compare": true, 00:15:26.911 "compare_and_write": true, 00:15:26.911 "abort": true, 00:15:26.911 "seek_hole": false, 00:15:26.911 "seek_data": false, 00:15:26.911 "copy": true, 00:15:26.911 "nvme_iov_md": false 00:15:26.911 }, 00:15:26.911 "memory_domains": [ 00:15:26.911 { 00:15:26.911 "dma_device_id": "system", 00:15:26.911 "dma_device_type": 1 00:15:26.911 } 00:15:26.911 ], 00:15:26.911 "driver_specific": { 00:15:26.911 "nvme": [ 00:15:26.911 { 00:15:26.911 "trid": { 00:15:26.911 "trtype": "TCP", 00:15:26.911 "adrfam": "IPv4", 00:15:26.911 "traddr": "10.0.0.2", 00:15:26.911 "trsvcid": "4420", 00:15:26.911 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:26.911 }, 00:15:26.911 "ctrlr_data": { 00:15:26.911 "cntlid": 1, 00:15:26.911 "vendor_id": "0x8086", 00:15:26.911 "model_number": "SPDK bdev Controller", 00:15:26.911 "serial_number": "SPDK0", 00:15:26.911 "firmware_revision": "24.09", 00:15:26.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:26.911 "oacs": { 00:15:26.911 "security": 0, 00:15:26.911 "format": 0, 00:15:26.911 "firmware": 0, 00:15:26.911 "ns_manage": 0 00:15:26.911 }, 00:15:26.911 "multi_ctrlr": true, 00:15:26.911 "ana_reporting": false 00:15:26.911 }, 
00:15:26.911 "vs": { 00:15:26.911 "nvme_version": "1.3" 00:15:26.911 }, 00:15:26.911 "ns_data": { 00:15:26.911 "id": 1, 00:15:26.911 "can_share": true 00:15:26.911 } 00:15:26.911 } 00:15:26.911 ], 00:15:26.911 "mp_policy": "active_passive" 00:15:26.911 } 00:15:26.911 } 00:15:26.911 ] 00:15:26.911 14:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1308880 00:15:26.911 14:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:26.911 14:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:26.911 Running I/O for 10 seconds... 00:15:27.852 Latency(us) 00:15:27.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.852 Nvme0n1 : 1.00 18070.00 70.59 0.00 0.00 0.00 0.00 0.00 00:15:27.852 =================================================================================================================== 00:15:27.852 Total : 18070.00 70.59 0.00 0.00 0.00 0.00 0.00 00:15:27.852 00:15:28.794 14:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:28.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.794 Nvme0n1 : 2.00 18185.50 71.04 0.00 0.00 0.00 0.00 0.00 00:15:28.794 =================================================================================================================== 00:15:28.794 Total : 18185.50 71.04 0.00 0.00 0.00 0.00 0.00 00:15:28.794 00:15:29.053 true 00:15:29.053 14:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:29.053 14:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:29.053 14:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:29.053 14:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:29.053 14:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1308880 00:15:29.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.993 Nvme0n1 : 3.00 18223.33 71.18 0.00 0.00 0.00 0.00 0.00 00:15:29.993 =================================================================================================================== 00:15:29.993 Total : 18223.33 71.18 0.00 0.00 0.00 0.00 0.00 00:15:29.993 00:15:30.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.933 Nvme0n1 : 4.00 18259.25 71.33 0.00 0.00 0.00 0.00 0.00 00:15:30.933 =================================================================================================================== 00:15:30.933 Total : 18259.25 71.33 0.00 0.00 0.00 0.00 0.00 00:15:30.933 00:15:31.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.877 Nvme0n1 : 5.00 18287.40 71.44 0.00 0.00 0.00 0.00 0.00 00:15:31.877 =================================================================================================================== 00:15:31.877 
Total : 18287.40 71.44 0.00 0.00 0.00 0.00 0.00 00:15:31.877 00:15:32.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.819 Nvme0n1 : 6.00 18298.17 71.48 0.00 0.00 0.00 0.00 0.00 00:15:32.819 =================================================================================================================== 00:15:32.819 Total : 18298.17 71.48 0.00 0.00 0.00 0.00 0.00 00:15:32.819 00:15:34.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.204 Nvme0n1 : 7.00 18316.00 71.55 0.00 0.00 0.00 0.00 0.00 00:15:34.204 =================================================================================================================== 00:15:34.204 Total : 18316.00 71.55 0.00 0.00 0.00 0.00 0.00 00:15:34.204 00:15:35.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.147 Nvme0n1 : 8.00 18328.25 71.59 0.00 0.00 0.00 0.00 0.00 00:15:35.147 =================================================================================================================== 00:15:35.147 Total : 18328.25 71.59 0.00 0.00 0.00 0.00 0.00 00:15:35.147 00:15:36.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:36.089 Nvme0n1 : 9.00 18338.00 71.63 0.00 0.00 0.00 0.00 0.00 00:15:36.089 =================================================================================================================== 00:15:36.089 Total : 18338.00 71.63 0.00 0.00 0.00 0.00 0.00 00:15:36.089 00:15:37.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.031 Nvme0n1 : 10.00 18345.40 71.66 0.00 0.00 0.00 0.00 0.00 00:15:37.031 =================================================================================================================== 00:15:37.031 Total : 18345.40 71.66 0.00 0.00 0.00 0.00 0.00 00:15:37.031 00:15:37.031 00:15:37.031 Latency(us) 00:15:37.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.031 Nvme0n1 : 10.00 18352.73 71.69 0.00 0.00 6972.22 4369.07 14090.24 00:15:37.031 =================================================================================================================== 00:15:37.031 Total : 18352.73 71.69 0.00 0.00 6972.22 4369.07 14090.24 00:15:37.031 0 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1308544 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1308544 ']' 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1308544 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1308544 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1308544' 00:15:37.031 killing process with pid 1308544 00:15:37.031 14:01:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1308544 00:15:37.031 Received shutdown signal, test time was about 10.000000 seconds 00:15:37.031 00:15:37.031 Latency(us) 00:15:37.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.031 =================================================================================================================== 00:15:37.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.031 14:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1308544 00:15:37.031 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:37.293 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:37.552 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:37.552 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:37.552 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:37.552 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:37.552 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:37.812 [2024-07-15 14:01:35.673152] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:37.812 request: 00:15:37.812 { 00:15:37.812 "uuid": "61de097a-333a-462b-a362-bd2c9e1e54bb", 00:15:37.812 "method": "bdev_lvol_get_lvstores", 00:15:37.812 "req_id": 1 00:15:37.812 } 00:15:37.812 Got JSON-RPC error response 00:15:37.812 response: 00:15:37.812 { 00:15:37.812 "code": -19, 00:15:37.812 "message": "No such device" 00:15:37.812 } 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:37.812 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:38.072 aio_bdev 00:15:38.072 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 12349cc3-c86b-446d-8967-10be31290919 00:15:38.072 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=12349cc3-c86b-446d-8967-10be31290919 00:15:38.072 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:38.072 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:38.072 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:38.072 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:38.072 14:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:38.072 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12349cc3-c86b-446d-8967-10be31290919 -t 2000 00:15:38.332 [ 00:15:38.332 { 00:15:38.332 "name": "12349cc3-c86b-446d-8967-10be31290919", 00:15:38.332 "aliases": [ 00:15:38.332 "lvs/lvol" 00:15:38.332 ], 00:15:38.332 "product_name": "Logical Volume", 00:15:38.332 "block_size": 4096, 00:15:38.332 "num_blocks": 38912, 00:15:38.332 "uuid": "12349cc3-c86b-446d-8967-10be31290919", 00:15:38.332 "assigned_rate_limits": { 00:15:38.332 "rw_ios_per_sec": 0, 00:15:38.332 "rw_mbytes_per_sec": 0, 00:15:38.332 "r_mbytes_per_sec": 0, 00:15:38.332 "w_mbytes_per_sec": 0 00:15:38.332 }, 00:15:38.332 "claimed": false, 00:15:38.332 "zoned": false, 00:15:38.332 "supported_io_types": { 00:15:38.332 "read": true, 00:15:38.332 "write": true, 00:15:38.332 "unmap": true, 00:15:38.332 "flush": false, 00:15:38.332 "reset": true, 00:15:38.332 "nvme_admin": false, 00:15:38.332 "nvme_io": false, 00:15:38.332 
"nvme_io_md": false, 00:15:38.332 "write_zeroes": true, 00:15:38.332 "zcopy": false, 00:15:38.332 "get_zone_info": false, 00:15:38.332 "zone_management": false, 00:15:38.332 "zone_append": false, 00:15:38.332 "compare": false, 00:15:38.332 "compare_and_write": false, 00:15:38.332 "abort": false, 00:15:38.332 "seek_hole": true, 00:15:38.332 "seek_data": true, 00:15:38.332 "copy": false, 00:15:38.332 "nvme_iov_md": false 00:15:38.332 }, 00:15:38.332 "driver_specific": { 00:15:38.332 "lvol": { 00:15:38.332 "lvol_store_uuid": "61de097a-333a-462b-a362-bd2c9e1e54bb", 00:15:38.332 "base_bdev": "aio_bdev", 00:15:38.332 "thin_provision": false, 00:15:38.332 "num_allocated_clusters": 38, 00:15:38.332 "snapshot": false, 00:15:38.332 "clone": false, 00:15:38.332 "esnap_clone": false 00:15:38.332 } 00:15:38.332 } 00:15:38.332 } 00:15:38.332 ] 00:15:38.332 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:38.332 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:38.332 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:38.332 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:38.332 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:38.333 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:38.592 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:38.592 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12349cc3-c86b-446d-8967-10be31290919 00:15:38.852 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61de097a-333a-462b-a362-bd2c9e1e54bb 00:15:38.852 14:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.112 00:15:39.112 real 0m15.069s 00:15:39.112 user 0m14.867s 00:15:39.112 sys 0m1.196s 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:39.112 ************************************ 00:15:39.112 END TEST lvs_grow_clean 00:15:39.112 ************************************ 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:39.112 ************************************ 00:15:39.112 START TEST lvs_grow_dirty 00:15:39.112 ************************************ 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:39.112 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.113 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.113 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:39.372 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:39.372 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:39.372 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5622a866-04ad-462d-9494-894f0b0b45c2 00:15:39.633 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:39.633 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:39.633 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:39.633 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:39.633 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5622a866-04ad-462d-9494-894f0b0b45c2 lvol 150 00:15:39.892 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=212827f0-e3b3-4287-a49e-e39ec755ac4d 00:15:39.892 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.892 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:39.892 
[2024-07-15 14:01:37.903670] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:39.892 [2024-07-15 14:01:37.903721] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:39.892 true 00:15:39.892 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:39.892 14:01:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:40.152 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:40.152 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:40.152 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 212827f0-e3b3-4287-a49e-e39ec755ac4d 00:15:40.416 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:40.416 [2024-07-15 14:01:38.497484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.416 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1311621 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1311621 /var/tmp/bdevperf.sock 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1311621 ']' 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:40.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
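The initiator side is identical in both variants (flags copied from the bdevperf invocations in this trace; bdevperf.py is the companion script under spdk/examples/bdev/bdevperf/):

    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
           -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # 10 s of 4 KiB random writes at QD 128

In other words, a second SPDK app attaches to the namespaced target over TCP and reports the per-second IOPS table printed below.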
00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:40.740 14:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:40.740 [2024-07-15 14:01:38.712747] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:40.740 [2024-07-15 14:01:38.712805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311621 ] 00:15:40.740 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.740 [2024-07-15 14:01:38.791134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.740 [2024-07-15 14:01:38.844923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.342 14:01:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.342 14:01:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:41.342 14:01:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:41.912 Nvme0n1 00:15:41.912 14:01:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:41.912 [ 00:15:41.912 { 00:15:41.912 "name": "Nvme0n1", 00:15:41.912 "aliases": [ 00:15:41.912 "212827f0-e3b3-4287-a49e-e39ec755ac4d" 00:15:41.912 ], 00:15:41.912 "product_name": "NVMe disk", 00:15:41.912 "block_size": 4096, 00:15:41.912 "num_blocks": 38912, 00:15:41.912 "uuid": "212827f0-e3b3-4287-a49e-e39ec755ac4d", 00:15:41.912 "assigned_rate_limits": { 00:15:41.912 "rw_ios_per_sec": 0, 00:15:41.912 "rw_mbytes_per_sec": 0, 00:15:41.912 "r_mbytes_per_sec": 0, 00:15:41.912 "w_mbytes_per_sec": 0 00:15:41.912 }, 00:15:41.912 "claimed": false, 00:15:41.912 "zoned": false, 00:15:41.912 "supported_io_types": { 00:15:41.912 "read": true, 00:15:41.912 "write": true, 00:15:41.912 "unmap": true, 00:15:41.912 "flush": true, 00:15:41.912 "reset": true, 00:15:41.912 "nvme_admin": true, 00:15:41.912 "nvme_io": true, 00:15:41.912 "nvme_io_md": false, 00:15:41.912 "write_zeroes": true, 00:15:41.912 "zcopy": false, 00:15:41.912 "get_zone_info": false, 00:15:41.912 "zone_management": false, 00:15:41.912 "zone_append": false, 00:15:41.912 "compare": true, 00:15:41.912 "compare_and_write": true, 00:15:41.912 "abort": true, 00:15:41.912 "seek_hole": false, 00:15:41.912 "seek_data": false, 00:15:41.912 "copy": true, 00:15:41.912 "nvme_iov_md": false 00:15:41.912 }, 00:15:41.912 "memory_domains": [ 00:15:41.912 { 00:15:41.912 "dma_device_id": "system", 00:15:41.912 "dma_device_type": 1 00:15:41.912 } 00:15:41.912 ], 00:15:41.912 "driver_specific": { 00:15:41.912 "nvme": [ 00:15:41.912 { 00:15:41.912 "trid": { 00:15:41.912 "trtype": "TCP", 00:15:41.912 "adrfam": "IPv4", 00:15:41.912 "traddr": "10.0.0.2", 00:15:41.912 "trsvcid": "4420", 
00:15:41.912 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:41.912 }, 00:15:41.912 "ctrlr_data": { 00:15:41.912 "cntlid": 1, 00:15:41.912 "vendor_id": "0x8086", 00:15:41.912 "model_number": "SPDK bdev Controller", 00:15:41.912 "serial_number": "SPDK0", 00:15:41.912 "firmware_revision": "24.09", 00:15:41.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:41.912 "oacs": { 00:15:41.912 "security": 0, 00:15:41.912 "format": 0, 00:15:41.912 "firmware": 0, 00:15:41.912 "ns_manage": 0 00:15:41.912 }, 00:15:41.912 "multi_ctrlr": true, 00:15:41.912 "ana_reporting": false 00:15:41.912 }, 00:15:41.912 "vs": { 00:15:41.912 "nvme_version": "1.3" 00:15:41.912 }, 00:15:41.912 "ns_data": { 00:15:41.912 "id": 1, 00:15:41.912 "can_share": true 00:15:41.912 } 00:15:41.912 } 00:15:41.912 ], 00:15:41.912 "mp_policy": "active_passive" 00:15:41.912 } 00:15:41.912 } 00:15:41.912 ] 00:15:42.174 14:01:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1311960 00:15:42.174 14:01:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:42.174 14:01:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.174 Running I/O for 10 seconds... 00:15:43.116 Latency(us) 00:15:43.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.116 Nvme0n1 : 1.00 17703.00 69.15 0.00 0.00 0.00 0.00 0.00 00:15:43.116 =================================================================================================================== 00:15:43.116 Total : 17703.00 69.15 0.00 0.00 0.00 0.00 0.00 00:15:43.116 00:15:44.058 14:01:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:44.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.058 Nvme0n1 : 2.00 17759.50 69.37 0.00 0.00 0.00 0.00 0.00 00:15:44.058 =================================================================================================================== 00:15:44.058 Total : 17759.50 69.37 0.00 0.00 0.00 0.00 0.00 00:15:44.058 00:15:44.318 true 00:15:44.318 14:01:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:44.318 14:01:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:44.318 14:01:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:44.318 14:01:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:44.318 14:01:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1311960 00:15:45.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.265 Nvme0n1 : 3.00 17789.00 69.49 0.00 0.00 0.00 0.00 0.00 00:15:45.265 =================================================================================================================== 00:15:45.265 Total : 17789.00 69.49 0.00 0.00 0.00 0.00 0.00 00:15:45.265 00:15:46.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:15:46.207 Nvme0n1 : 4.00 17815.75 69.59 0.00 0.00 0.00 0.00 0.00 00:15:46.208 =================================================================================================================== 00:15:46.208 Total : 17815.75 69.59 0.00 0.00 0.00 0.00 0.00 00:15:46.208 00:15:47.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.148 Nvme0n1 : 5.00 17833.40 69.66 0.00 0.00 0.00 0.00 0.00 00:15:47.148 =================================================================================================================== 00:15:47.148 Total : 17833.40 69.66 0.00 0.00 0.00 0.00 0.00 00:15:47.148 00:15:48.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.113 Nvme0n1 : 6.00 17850.50 69.73 0.00 0.00 0.00 0.00 0.00 00:15:48.113 =================================================================================================================== 00:15:48.113 Total : 17850.50 69.73 0.00 0.00 0.00 0.00 0.00 00:15:48.113 00:15:49.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:49.056 Nvme0n1 : 7.00 17865.00 69.79 0.00 0.00 0.00 0.00 0.00 00:15:49.056 =================================================================================================================== 00:15:49.056 Total : 17865.00 69.79 0.00 0.00 0.00 0.00 0.00 00:15:49.056 00:15:50.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.439 Nvme0n1 : 8.00 17876.88 69.83 0.00 0.00 0.00 0.00 0.00 00:15:50.439 =================================================================================================================== 00:15:50.439 Total : 17876.88 69.83 0.00 0.00 0.00 0.00 0.00 00:15:50.439 00:15:51.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.381 Nvme0n1 : 9.00 17888.78 69.88 0.00 0.00 0.00 0.00 0.00 00:15:51.381 =================================================================================================================== 00:15:51.381 Total : 17888.78 69.88 0.00 0.00 0.00 0.00 0.00 00:15:51.381 00:15:52.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.322 Nvme0n1 : 10.00 17895.90 69.91 0.00 0.00 0.00 0.00 0.00 00:15:52.322 =================================================================================================================== 00:15:52.322 Total : 17895.90 69.91 0.00 0.00 0.00 0.00 0.00 00:15:52.322 00:15:52.322 00:15:52.322 Latency(us) 00:15:52.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.322 Nvme0n1 : 10.01 17895.88 69.91 0.00 0.00 7147.86 1720.32 9065.81 00:15:52.322 =================================================================================================================== 00:15:52.322 Total : 17895.88 69.91 0.00 0.00 7147.86 1720.32 9065.81 00:15:52.322 0 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1311621 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1311621 ']' 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1311621 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.322 14:01:50 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1311621 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1311621' 00:15:52.322 killing process with pid 1311621 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1311621 00:15:52.322 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.322 00:15:52.322 Latency(us) 00:15:52.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.322 =================================================================================================================== 00:15:52.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1311621 00:15:52.322 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:52.584 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:52.584 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:52.584 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1308131 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1308131 00:15:52.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1308131 Killed "${NVMF_APP[@]}" "$@" 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1313999 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1313999 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1313999 ']' 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.844 14:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:52.844 [2024-07-15 14:01:50.949430] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:52.845 [2024-07-15 14:01:50.949488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.105 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.105 [2024-07-15 14:01:51.022787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.105 [2024-07-15 14:01:51.087884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.105 [2024-07-15 14:01:51.087922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.106 [2024-07-15 14:01:51.087930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.106 [2024-07-15 14:01:51.087936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.106 [2024-07-15 14:01:51.087941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
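This is the dirty half of the test: the first nvmf_tgt (pid 1308131) was SIGKILLed while the grown lvstore was still open, and a fresh target has just been started in the same network namespace. Re-creating the AIO bdev is what forces blobstore recovery; the expected shape, with <lvs-uuid> standing for the 5622a866-... store:

    rpc.py bdev_aio_create aio_bdev aio_bdev 4096    # triggers 'Performing recovery on blobstore'
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'        # expect 61
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'  # expect 99

If recovery works, the post-grow cluster counts survive the crash, which is exactly what the assertions below verify before the final teardown.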
00:15:53.106 [2024-07-15 14:01:51.087959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.677 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.677 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:53.677 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.677 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.677 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:53.677 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.677 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:53.938 [2024-07-15 14:01:51.880434] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:53.938 [2024-07-15 14:01:51.880517] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:53.938 [2024-07-15 14:01:51.880547] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 212827f0-e3b3-4287-a49e-e39ec755ac4d 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=212827f0-e3b3-4287-a49e-e39ec755ac4d 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:53.938 14:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:54.199 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 212827f0-e3b3-4287-a49e-e39ec755ac4d -t 2000 00:15:54.199 [ 00:15:54.199 { 00:15:54.199 "name": "212827f0-e3b3-4287-a49e-e39ec755ac4d", 00:15:54.199 "aliases": [ 00:15:54.199 "lvs/lvol" 00:15:54.199 ], 00:15:54.199 "product_name": "Logical Volume", 00:15:54.199 "block_size": 4096, 00:15:54.199 "num_blocks": 38912, 00:15:54.199 "uuid": "212827f0-e3b3-4287-a49e-e39ec755ac4d", 00:15:54.199 "assigned_rate_limits": { 00:15:54.199 "rw_ios_per_sec": 0, 00:15:54.199 "rw_mbytes_per_sec": 0, 00:15:54.199 "r_mbytes_per_sec": 0, 00:15:54.199 "w_mbytes_per_sec": 0 00:15:54.199 }, 00:15:54.199 "claimed": false, 00:15:54.199 "zoned": false, 00:15:54.199 "supported_io_types": { 00:15:54.199 "read": true, 00:15:54.199 "write": true, 00:15:54.199 "unmap": true, 00:15:54.199 "flush": false, 00:15:54.199 "reset": true, 00:15:54.199 "nvme_admin": false, 00:15:54.199 "nvme_io": false, 00:15:54.199 "nvme_io_md": 
false, 00:15:54.199 "write_zeroes": true, 00:15:54.199 "zcopy": false, 00:15:54.199 "get_zone_info": false, 00:15:54.199 "zone_management": false, 00:15:54.199 "zone_append": false, 00:15:54.199 "compare": false, 00:15:54.199 "compare_and_write": false, 00:15:54.199 "abort": false, 00:15:54.199 "seek_hole": true, 00:15:54.199 "seek_data": true, 00:15:54.199 "copy": false, 00:15:54.199 "nvme_iov_md": false 00:15:54.199 }, 00:15:54.199 "driver_specific": { 00:15:54.199 "lvol": { 00:15:54.199 "lvol_store_uuid": "5622a866-04ad-462d-9494-894f0b0b45c2", 00:15:54.199 "base_bdev": "aio_bdev", 00:15:54.199 "thin_provision": false, 00:15:54.199 "num_allocated_clusters": 38, 00:15:54.199 "snapshot": false, 00:15:54.199 "clone": false, 00:15:54.199 "esnap_clone": false 00:15:54.199 } 00:15:54.199 } 00:15:54.199 } 00:15:54.199 ] 00:15:54.199 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:54.199 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:54.199 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:54.460 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:54.460 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:54.460 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:54.460 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:54.460 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:54.721 [2024-07-15 14:01:52.648379] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:54.721 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:54.721 request: 00:15:54.721 { 00:15:54.721 "uuid": "5622a866-04ad-462d-9494-894f0b0b45c2", 00:15:54.721 "method": "bdev_lvol_get_lvstores", 00:15:54.721 "req_id": 1 00:15:54.722 } 00:15:54.722 Got JSON-RPC error response 00:15:54.722 response: 00:15:54.722 { 00:15:54.722 "code": -19, 00:15:54.722 "message": "No such device" 00:15:54.722 } 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:54.982 aio_bdev 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 212827f0-e3b3-4287-a49e-e39ec755ac4d 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=212827f0-e3b3-4287-a49e-e39ec755ac4d 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:54.982 14:01:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:55.244 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 212827f0-e3b3-4287-a49e-e39ec755ac4d -t 2000 00:15:55.244 [ 00:15:55.244 { 00:15:55.244 "name": "212827f0-e3b3-4287-a49e-e39ec755ac4d", 00:15:55.244 "aliases": [ 00:15:55.244 "lvs/lvol" 00:15:55.244 ], 00:15:55.244 "product_name": "Logical Volume", 00:15:55.244 "block_size": 4096, 00:15:55.244 "num_blocks": 38912, 00:15:55.244 "uuid": "212827f0-e3b3-4287-a49e-e39ec755ac4d", 00:15:55.244 "assigned_rate_limits": { 00:15:55.244 "rw_ios_per_sec": 0, 00:15:55.244 "rw_mbytes_per_sec": 0, 00:15:55.244 "r_mbytes_per_sec": 0, 00:15:55.244 "w_mbytes_per_sec": 0 00:15:55.244 }, 00:15:55.244 "claimed": false, 00:15:55.244 "zoned": false, 00:15:55.244 "supported_io_types": { 
00:15:55.244 "read": true, 00:15:55.244 "write": true, 00:15:55.244 "unmap": true, 00:15:55.244 "flush": false, 00:15:55.244 "reset": true, 00:15:55.244 "nvme_admin": false, 00:15:55.244 "nvme_io": false, 00:15:55.244 "nvme_io_md": false, 00:15:55.244 "write_zeroes": true, 00:15:55.244 "zcopy": false, 00:15:55.244 "get_zone_info": false, 00:15:55.244 "zone_management": false, 00:15:55.244 "zone_append": false, 00:15:55.244 "compare": false, 00:15:55.244 "compare_and_write": false, 00:15:55.244 "abort": false, 00:15:55.244 "seek_hole": true, 00:15:55.244 "seek_data": true, 00:15:55.244 "copy": false, 00:15:55.244 "nvme_iov_md": false 00:15:55.244 }, 00:15:55.244 "driver_specific": { 00:15:55.244 "lvol": { 00:15:55.244 "lvol_store_uuid": "5622a866-04ad-462d-9494-894f0b0b45c2", 00:15:55.244 "base_bdev": "aio_bdev", 00:15:55.244 "thin_provision": false, 00:15:55.244 "num_allocated_clusters": 38, 00:15:55.244 "snapshot": false, 00:15:55.244 "clone": false, 00:15:55.244 "esnap_clone": false 00:15:55.244 } 00:15:55.244 } 00:15:55.244 } 00:15:55.244 ] 00:15:55.244 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:55.244 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:55.244 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:55.505 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:55.505 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:55.505 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:55.765 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:55.765 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 212827f0-e3b3-4287-a49e-e39ec755ac4d 00:15:55.765 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5622a866-04ad-462d-9494-894f0b0b45c2 00:15:56.025 14:01:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:56.025 14:01:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:56.025 00:15:56.025 real 0m16.977s 00:15:56.025 user 0m44.161s 00:15:56.025 sys 0m2.999s 00:15:56.025 14:01:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.025 14:01:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:56.025 ************************************ 00:15:56.025 END TEST lvs_grow_dirty 00:15:56.025 ************************************ 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
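A quick consistency check on the free/total cluster assertions repeated just above, assuming the lvol store uses SPDK's default 4 MiB cluster size (the cluster size itself is not shown in this log):

    38912 blocks x 4096 B/block = 152 MiB of volume data
    152 MiB / 4 MiB per cluster = 38 clusters, matching "num_allocated_clusters": 38
    99 total data clusters - 61 free = 38 allocated, so both views agree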
00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:56.286 nvmf_trace.0 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.286 rmmod nvme_tcp 00:15:56.286 rmmod nvme_fabrics 00:15:56.286 rmmod nvme_keyring 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1313999 ']' 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1313999 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1313999 ']' 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1313999 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1313999 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1313999' 00:15:56.286 killing process with pid 1313999 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1313999 00:15:56.286 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1313999 00:15:56.547 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.547 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.547 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.547 
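The process_shm capture above archives the application's trace buffer out of /dev/shm before teardown so it survives workspace cleanup. A minimal sketch of the same idea, assuming shm id 0; the output directory here is illustrative, not the Jenkins path used above:

    # locate the app's *.0 shm trace file(s) and archive them, as traced above
    for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -czf "output/${f}_shm.tar.gz" "$f"
    done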
14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.547 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.547 14:01:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.547 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.547 14:01:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.458 14:01:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.458 00:15:58.458 real 0m43.717s 00:15:58.458 user 1m5.132s 00:15:58.458 sys 0m10.561s 00:15:58.458 14:01:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.458 14:01:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:58.458 ************************************ 00:15:58.458 END TEST nvmf_lvs_grow 00:15:58.458 ************************************ 00:15:58.720 14:01:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:58.720 14:01:56 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:58.720 14:01:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:58.720 14:01:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.720 14:01:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.720 ************************************ 00:15:58.720 START TEST nvmf_bdev_io_wait 00:15:58.720 ************************************ 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:58.720 * Looking for test storage... 
00:15:58.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:58.720 14:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:06.863 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:06.863 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:06.863 Found net devices under 0000:31:00.0: cvl_0_0 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:06.863 Found net devices under 0000:31:00.1: cvl_0_1 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:06.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:16:06.863 00:16:06.863 --- 10.0.0.2 ping statistics --- 00:16:06.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.863 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:16:06.863 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:16:06.863 00:16:06.863 --- 10.0.0.1 ping statistics --- 00:16:06.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.864 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1319400 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1319400 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1319400 ']' 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.864 14:02:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:06.864 [2024-07-15 14:02:04.478914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:06.864 [2024-07-15 14:02:04.478961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.864 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.864 [2024-07-15 14:02:04.552672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.864 [2024-07-15 14:02:04.619031] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.864 [2024-07-15 14:02:04.619070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.864 [2024-07-15 14:02:04.619078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.864 [2024-07-15 14:02:04.619087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.864 [2024-07-15 14:02:04.619092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.864 [2024-07-15 14:02:04.619232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.864 [2024-07-15 14:02:04.619354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.864 [2024-07-15 14:02:04.619510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.864 [2024-07-15 14:02:04.619511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.435 [2024-07-15 14:02:05.355419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
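To this point the target side has come up in three traced steps: nvmf_tgt launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, bdev options set before framework init, then the TCP transport created. A condensed sketch, with rpc.py standing in for the full workspace path to scripts/rpc.py:

    # as executed above; ordering matters, bdev_set_options must land
    # before framework_start_init takes effect
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    rpc.py bdev_set_options -p 5 -c 1
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192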
00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.435 Malloc0 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:07.435 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:07.436 [2024-07-15 14:02:05.423184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1319464 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1319467 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:07.436 { 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme$subsystem", 00:16:07.436 "trtype": "$TEST_TRANSPORT", 00:16:07.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "$NVMF_PORT", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.436 "hdgst": ${hdgst:-false}, 00:16:07.436 "ddgst": ${ddgst:-false} 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 } 00:16:07.436 EOF 00:16:07.436 )") 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1319470 00:16:07.436 14:02:05 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:07.436 { 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme$subsystem", 00:16:07.436 "trtype": "$TEST_TRANSPORT", 00:16:07.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "$NVMF_PORT", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.436 "hdgst": ${hdgst:-false}, 00:16:07.436 "ddgst": ${ddgst:-false} 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 } 00:16:07.436 EOF 00:16:07.436 )") 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1319475 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:07.436 { 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme$subsystem", 00:16:07.436 "trtype": "$TEST_TRANSPORT", 00:16:07.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "$NVMF_PORT", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.436 "hdgst": ${hdgst:-false}, 00:16:07.436 "ddgst": ${ddgst:-false} 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 } 00:16:07.436 EOF 00:16:07.436 )") 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:07.436 { 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme$subsystem", 00:16:07.436 "trtype": "$TEST_TRANSPORT", 00:16:07.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "$NVMF_PORT", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.436 "hdgst": ${hdgst:-false}, 00:16:07.436 "ddgst": ${ddgst:-false} 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 } 00:16:07.436 EOF 00:16:07.436 )") 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1319464 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme1", 00:16:07.436 "trtype": "tcp", 00:16:07.436 "traddr": "10.0.0.2", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "4420", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.436 "hdgst": false, 00:16:07.436 "ddgst": false 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 }' 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
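Each bdevperf job launched above receives its JSON config through process substitution, which is why the command lines show --json /dev/fd/63: that fd is the read end of the gen_nvmf_target_json output being assembled here. A sketch of one invocation, with the workspace path to the bdevperf binary abbreviated:

    # mirrors the write job above; the other three differ only in -m, -i and -w
    bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)    # the shell exposes this as /dev/fd/63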
00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme1", 00:16:07.436 "trtype": "tcp", 00:16:07.436 "traddr": "10.0.0.2", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "4420", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.436 "hdgst": false, 00:16:07.436 "ddgst": false 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 }' 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme1", 00:16:07.436 "trtype": "tcp", 00:16:07.436 "traddr": "10.0.0.2", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "4420", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.436 "hdgst": false, 00:16:07.436 "ddgst": false 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 }' 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:07.436 14:02:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:07.436 "params": { 00:16:07.436 "name": "Nvme1", 00:16:07.436 "trtype": "tcp", 00:16:07.436 "traddr": "10.0.0.2", 00:16:07.436 "adrfam": "ipv4", 00:16:07.436 "trsvcid": "4420", 00:16:07.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.436 "hdgst": false, 00:16:07.436 "ddgst": false 00:16:07.436 }, 00:16:07.436 "method": "bdev_nvme_attach_controller" 00:16:07.436 }' 00:16:07.436 [2024-07-15 14:02:05.475313] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:07.436 [2024-07-15 14:02:05.475365] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:07.436 [2024-07-15 14:02:05.476839] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:07.436 [2024-07-15 14:02:05.476887] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:07.436 [2024-07-15 14:02:05.479828] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:07.436 [2024-07-15 14:02:05.479874] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:07.436 [2024-07-15 14:02:05.481024] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:07.437 [2024-07-15 14:02:05.481069] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:07.437 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.698 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.698 [2024-07-15 14:02:05.631069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.698 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.698 [2024-07-15 14:02:05.682083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:07.698 [2024-07-15 14:02:05.688572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.698 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.698 [2024-07-15 14:02:05.738551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:07.698 [2024-07-15 14:02:05.750330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.698 [2024-07-15 14:02:05.800966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:07.698 [2024-07-15 14:02:05.810711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.958 [2024-07-15 14:02:05.862333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:07.958 Running I/O for 1 seconds... 00:16:07.958 Running I/O for 1 seconds... 00:16:07.958 Running I/O for 1 seconds... 00:16:08.218 Running I/O for 1 seconds... 00:16:09.162 00:16:09.162 Latency(us) 00:16:09.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.162 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:09.162 Nvme1n1 : 1.01 11734.58 45.84 0.00 0.00 10838.82 4860.59 16274.77 00:16:09.162 =================================================================================================================== 00:16:09.162 Total : 11734.58 45.84 0.00 0.00 10838.82 4860.59 16274.77 00:16:09.162 00:16:09.162 Latency(us) 00:16:09.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.162 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:09.162 Nvme1n1 : 1.00 187924.05 734.08 0.00 0.00 677.98 269.65 785.07 00:16:09.162 =================================================================================================================== 00:16:09.162 Total : 187924.05 734.08 0.00 0.00 677.98 269.65 785.07 00:16:09.162 00:16:09.162 Latency(us) 00:16:09.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.162 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:09.162 Nvme1n1 : 1.00 11650.39 45.51 0.00 0.00 10965.40 3153.92 24794.45 00:16:09.162 =================================================================================================================== 00:16:09.162 Total : 11650.39 45.51 0.00 0.00 10965.40 3153.92 24794.45 00:16:09.162 00:16:09.162 Latency(us) 00:16:09.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.162 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:09.162 Nvme1n1 : 1.01 13010.44 50.82 0.00 0.00 9803.82 6335.15 18568.53 00:16:09.162 =================================================================================================================== 00:16:09.162 Total : 13010.44 50.82 0.00 0.00 9803.82 6335.15 18568.53 00:16:09.162 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1319467 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1319470 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1319475 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.424 rmmod nvme_tcp 00:16:09.424 rmmod nvme_fabrics 00:16:09.424 rmmod nvme_keyring 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1319400 ']' 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1319400 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1319400 ']' 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1319400 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1319400 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1319400' 00:16:09.424 killing process with pid 1319400 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1319400 00:16:09.424 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1319400 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.758 14:02:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.673 14:02:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.673 00:16:11.673 real 0m12.993s 00:16:11.673 user 0m19.611s 00:16:11.673 sys 0m6.986s 00:16:11.673 14:02:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.673 14:02:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:11.673 ************************************ 00:16:11.673 END TEST nvmf_bdev_io_wait 00:16:11.673 ************************************ 00:16:11.673 14:02:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:11.673 14:02:09 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:11.673 14:02:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:11.673 14:02:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.673 14:02:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.673 ************************************ 00:16:11.673 START TEST nvmf_queue_depth 00:16:11.673 ************************************ 00:16:11.673 14:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:11.934 * Looking for test storage... 
00:16:11.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.934 14:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.072 
14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:20.072 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.072 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:20.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:20.073 Found net devices under 0000:31:00.0: cvl_0_0 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:20.073 Found net devices under 0000:31:00.1: cvl_0_1 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:20.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:16:20.073 00:16:20.073 --- 10.0.0.2 ping statistics --- 00:16:20.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.073 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:16:20.073 00:16:20.073 --- 10.0.0.1 ping statistics --- 00:16:20.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.073 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1324563 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1324563 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1324563 ']' 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.073 14:02:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:20.073 [2024-07-15 14:02:17.930936] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
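By the time the target launches, nvmftestinit has built the loopback topology this test runs on. The two E810 ports (0000:31:00.0 and 0000:31:00.1, matched by PCI ID 0x8086:0x159b and mapped to net devices through /sys/bus/pci/devices/$pci/net) come up as cvl_0_0 and cvl_0_1; cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two-way pings above confirm the path. Collected from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The target itself is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so on this rig the NVMe/TCP traffic presumably crosses the physical link between the two ports rather than the kernel loopback.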
00:16:20.073 [2024-07-15 14:02:17.931004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.073 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.073 [2024-07-15 14:02:18.026407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.073 [2024-07-15 14:02:18.118576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.073 [2024-07-15 14:02:18.118639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.073 [2024-07-15 14:02:18.118647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.073 [2024-07-15 14:02:18.118654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.073 [2024-07-15 14:02:18.118660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.073 [2024-07-15 14:02:18.118685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.644 [2024-07-15 14:02:18.722620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.644 Malloc0 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.644 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.905 
14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.905 [2024-07-15 14:02:18.775289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1324823 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1324823 /var/tmp/bdevperf.sock 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1324823 ']' 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.905 14:02:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:20.905 [2024-07-15 14:02:18.802632] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
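Everything the target needs for this test is configured over its RPC socket: a TCP transport (created with the harness's TCP options, -o -u 8192), a 64 MiB malloc ramdisk with 512-byte blocks, the cnode1 subsystem with that bdev as its namespace, and a listener on 10.0.0.2:4420. Spelled out as explicit scripts/rpc.py invocations — a sketch, assuming rpc_cmd forwards to that script as in SPDK's autotest harness:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself is launched with -z, i.e. it idles until a perform_tests RPC arrives on /var/tmp/bdevperf.sock, and with -q 1024 -o 4096 -w verify -t 10: queue depth 1024, 4 KiB I/Os, a verify (read-back-and-compare) workload, for 10 seconds. Note the core masks: the target runs with -m 0x2 (reactor on core 1, shown earlier) and bdevperf with -c 0x1 (core 0, visible in the trace below), so the two pollers never share a CPU.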
00:16:20.905 [2024-07-15 14:02:18.802677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324823 ] 00:16:20.905 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.905 [2024-07-15 14:02:18.860440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.905 [2024-07-15 14:02:18.924827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.477 14:02:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.477 14:02:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:21.477 14:02:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:21.738 14:02:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.738 14:02:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.738 NVMe0n1 00:16:21.738 14:02:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.738 14:02:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:21.738 Running I/O for 10 seconds... 00:16:31.740 00:16:31.740 Latency(us) 00:16:31.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.740 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:31.740 Verification LBA range: start 0x0 length 0x4000 00:16:31.740 NVMe0n1 : 10.04 11328.84 44.25 0.00 0.00 90058.23 4396.37 77332.48 00:16:31.740 =================================================================================================================== 00:16:31.740 Total : 11328.84 44.25 0.00 0.00 90058.23 4396.37 77332.48 00:16:31.740 0 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1324823 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1324823 ']' 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1324823 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1324823 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1324823' 00:16:32.004 killing process with pid 1324823 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1324823 00:16:32.004 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.004 00:16:32.004 Latency(us) 00:16:32.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.004 
=================================================================================================================== 00:16:32.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.004 14:02:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1324823 00:16:32.004 14:02:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:32.004 14:02:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:32.004 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.005 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:32.005 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.005 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:32.005 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.005 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.005 rmmod nvme_tcp 00:16:32.005 rmmod nvme_fabrics 00:16:32.005 rmmod nvme_keyring 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1324563 ']' 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1324563 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1324563 ']' 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1324563 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1324563 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1324563' 00:16:32.266 killing process with pid 1324563 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1324563 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1324563 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.266 14:02:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.813 14:02:32 nvmf_tcp.nvmf_queue_depth -- 
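As a sanity check, the queue-depth run's numbers are internally consistent: 11328.84 IOPS of 4096-byte I/Os is 11328.84 * 4096 / 2^20 ≈ 44.25 MiB/s, matching the MiB/s column, and by Little's law the sustained in-flight depth is throughput * mean latency = 11328.84/s * 90058.23 us ≈ 1020, i.e. the run really did keep (just under) the configured -q 1024 outstanding. The second, all-zero latency table is just bdevperf's shutdown summary printed after the 10-second window closed.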
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.813 00:16:34.813 real 0m22.673s 00:16:34.813 user 0m25.642s 00:16:34.813 sys 0m7.030s 00:16:34.813 14:02:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.813 14:02:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:34.813 ************************************ 00:16:34.813 END TEST nvmf_queue_depth 00:16:34.813 ************************************ 00:16:34.813 14:02:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.813 14:02:32 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:34.813 14:02:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.813 14:02:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.813 14:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.813 ************************************ 00:16:34.813 START TEST nvmf_target_multipath 00:16:34.813 ************************************ 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:34.813 * Looking for test storage... 00:16:34.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.813 14:02:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.950 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:42.951 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:42.951 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:42.951 Found net devices under 0000:31:00.0: cvl_0_0 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:42.951 Found net devices under 0000:31:00.1: cvl_0_1 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:42.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:16:42.951 00:16:42.951 --- 10.0.0.2 ping statistics --- 00:16:42.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.951 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:16:42.951 00:16:42.951 --- 10.0.0.1 ping statistics --- 00:16:42.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.951 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:42.951 only one NIC for nvmf test 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.951 rmmod nvme_tcp 00:16:42.951 rmmod nvme_fabrics 00:16:42.951 rmmod nvme_keyring 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.951 14:02:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:44.866 00:16:44.866 real 0m10.479s 00:16:44.866 user 0m2.439s 00:16:44.866 sys 0m5.942s 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.866 14:02:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:44.866 ************************************ 00:16:44.866 END TEST nvmf_target_multipath 00:16:44.866 ************************************ 00:16:45.128 14:02:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:45.128 14:02:42 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:45.128 14:02:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:45.128 14:02:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.128 14:02:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.128 ************************************ 00:16:45.128 START TEST nvmf_zcopy 00:16:45.128 ************************************ 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:45.128 * Looking for test storage... 
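Each suite in this log is launched through run_test, which produces the START/END banners and the real/user/sys timing block seen above. A minimal sketch of that wrapper behavior, under the assumption that the real run_test in autotest_common.sh also does return-code and xtrace bookkeeping not shown here (run_test_sketch is a hypothetical stand-in name):

    # Hypothetical stand-in for the banner-and-timing wrapper visible in this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines printed after each suite
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test_sketch nvmf_zcopy test/nvmf/target/zcopy.sh --transport=tcp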
00:16:45.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
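The host identity exported here comes straight from nvme-cli: nvme gen-hostnqn emits a UUID-based NQN, and common.sh keeps the UUID portion as the host ID. A short sketch of that setup, assuming the derivation is a plain suffix strip:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later initiator-side commands can splice in "${NVME_HOST[@]}"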
00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.128 14:02:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.129 14:02:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:53.273 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:53.274 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.274 
14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:53.274 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:53.274 Found net devices under 0000:31:00.0: cvl_0_0 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:53.274 Found net devices under 0000:31:00.1: cvl_0_1 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.274 14:02:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:16:53.274 00:16:53.274 --- 10.0.0.2 ping statistics --- 00:16:53.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.274 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
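The nvmf_tcp_init trace above builds the whole test topology from the two e810 ports: the target port is moved into its own network namespace so that initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) talk across the physical link rather than over loopback. Consolidated from the trace, the sequence is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator-side port
    ping -c 1 10.0.0.2                                                 # verify the path before starting the target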
00:16:53.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:16:53.274 00:16:53.274 --- 10.0.0.1 ping statistics --- 00:16:53.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.274 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1336329 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1336329 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1336329 ']' 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.274 14:02:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.536 [2024-07-15 14:02:51.394913] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:53.537 [2024-07-15 14:02:51.394978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.537 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.537 [2024-07-15 14:02:51.493640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.537 [2024-07-15 14:02:51.585815] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.537 [2024-07-15 14:02:51.585872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
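nvmfappstart then launches the target inside that namespace and waits for its RPC socket. A sketch of the launch traced above, with the workspace path abbreviated; waitforlisten is the autotest helper that blocks until the app answers on /var/tmp/spdk.sock, matching the "Waiting for process to start up and listen..." message in the log:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # polls the UNIX-domain RPC socket until it is up

The -m 0x2 core mask pins the target to core 1, which is why the reactor start notice above reports core 1.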
00:16:53.537 [2024-07-15 14:02:51.585881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.537 [2024-07-15 14:02:51.585888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.537 [2024-07-15 14:02:51.585894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.537 [2024-07-15 14:02:51.585919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.109 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:54.109 [2024-07-15 14:02:52.221473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:54.370 [2024-07-15 14:02:52.237694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:54.370 malloc0 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.370 
14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:54.370 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:54.370 { 00:16:54.370 "params": { 00:16:54.370 "name": "Nvme$subsystem", 00:16:54.370 "trtype": "$TEST_TRANSPORT", 00:16:54.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.370 "adrfam": "ipv4", 00:16:54.370 "trsvcid": "$NVMF_PORT", 00:16:54.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.370 "hdgst": ${hdgst:-false}, 00:16:54.370 "ddgst": ${ddgst:-false} 00:16:54.370 }, 00:16:54.371 "method": "bdev_nvme_attach_controller" 00:16:54.371 } 00:16:54.371 EOF 00:16:54.371 )") 00:16:54.371 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:54.371 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:54.371 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:54.371 14:02:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:54.371 "params": { 00:16:54.371 "name": "Nvme1", 00:16:54.371 "trtype": "tcp", 00:16:54.371 "traddr": "10.0.0.2", 00:16:54.371 "adrfam": "ipv4", 00:16:54.371 "trsvcid": "4420", 00:16:54.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.371 "hdgst": false, 00:16:54.371 "ddgst": false 00:16:54.371 }, 00:16:54.371 "method": "bdev_nvme_attach_controller" 00:16:54.371 }' 00:16:54.371 [2024-07-15 14:02:52.322695] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:54.371 [2024-07-15 14:02:52.322764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336538 ] 00:16:54.371 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.371 [2024-07-15 14:02:52.393377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.371 [2024-07-15 14:02:52.467274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.944 Running I/O for 10 seconds... 
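Everything the bdevperf run needs on the target side was assembled over JSON-RPC just before I/O started. Written out with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper that forwards its arguments there), the sequence is:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                                              # allow any host, up to 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB RAM disk, 4 KiB blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # expose it as NSID 1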
00:17:04.991 00:17:04.991 Latency(us) 00:17:04.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.991 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:04.991 Verification LBA range: start 0x0 length 0x1000 00:17:04.991 Nvme1n1 : 10.01 8784.21 68.63 0.00 0.00 14518.92 1515.52 26869.76 00:17:04.991 =================================================================================================================== 00:17:04.991 Total : 8784.21 68.63 0.00 0.00 14518.92 1515.52 26869.76 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1338659 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:04.991 { 00:17:04.991 "params": { 00:17:04.991 "name": "Nvme$subsystem", 00:17:04.991 "trtype": "$TEST_TRANSPORT", 00:17:04.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.991 "adrfam": "ipv4", 00:17:04.991 "trsvcid": "$NVMF_PORT", 00:17:04.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.991 "hdgst": ${hdgst:-false}, 00:17:04.991 "ddgst": ${ddgst:-false} 00:17:04.991 }, 00:17:04.991 "method": "bdev_nvme_attach_controller" 00:17:04.991 } 00:17:04.991 EOF 00:17:04.991 )") 00:17:04.991 [2024-07-15 14:03:02.942384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.942410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
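The 10-second verify table above is internally consistent: throughput is just IOPS times the 8192-byte I/O size. A quick cross-check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8784.21 * 8192 / (1024 * 1024) }'   # prints 68.63 MiB/s, matching the MiB/s column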
00:17:04.991 [2024-07-15 14:03:02.950378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.950387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:04.991 14:03:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:04.991 "params": { 00:17:04.991 "name": "Nvme1", 00:17:04.991 "trtype": "tcp", 00:17:04.991 "traddr": "10.0.0.2", 00:17:04.991 "adrfam": "ipv4", 00:17:04.991 "trsvcid": "4420", 00:17:04.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.991 "hdgst": false, 00:17:04.991 "ddgst": false 00:17:04.991 }, 00:17:04.991 "method": "bdev_nvme_attach_controller" 00:17:04.991 }' 00:17:04.991 [2024-07-15 14:03:02.958395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.958403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:02.966414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.966421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:02.974435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.974442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:02.982455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.982463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:02.983848] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
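The single-quoted blob printf'd to jq above is how gen_nvmf_target_json hands bdevperf its target without a config file: the document arrives on --json /dev/fd/63 through process substitution. Pretty-printed, the attach entry for this run reads as below; the enclosing subsystems/bdev wrapper that jq assembles around it is not shown verbatim in the log, so only the fragment is reproduced:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }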
00:17:04.991 [2024-07-15 14:03:02.983896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338659 ] 00:17:04.991 [2024-07-15 14:03:02.990475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.990483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:02.998494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:02.998502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.006514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.006522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.991 [2024-07-15 14:03:03.014535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.014543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.022556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.022563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.030577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.030585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.038597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.038605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.046618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.046626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.048335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.991 [2024-07-15 14:03:03.054639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.054647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.062659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.062666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.070679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.070687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.078700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.078709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.086722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 
14:03:03.086735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.094741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.094755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.991 [2024-07-15 14:03:03.102765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.991 [2024-07-15 14:03:03.102773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.110785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.110794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.111916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.253 [2024-07-15 14:03:03.118802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.118809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.126827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.126840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.134848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.134859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.142874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.142884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.154895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.154904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.162915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.162923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.170934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.170943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.178954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.178962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.186979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.186991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.195006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.195017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.203022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.203031] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.211041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.211050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.219061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.219071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.227082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.227090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.235105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.235113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.243127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.243135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.251148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.251156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.259171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.259181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.267192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.267201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.275216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.275229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.283234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.283242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.291256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.291264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.299278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.299286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.307300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.307308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.315320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.315328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.323341] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.323349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.331361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.331369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.339382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.339390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.347403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.347411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.355425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.355435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.253 [2024-07-15 14:03:03.363445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.253 [2024-07-15 14:03:03.363453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.371466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.371474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.379487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.379495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.387509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.387516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.395531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.395539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.403551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.403559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.411583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.411599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.419596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.419605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 Running I/O for 5 seconds... 
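The long run of "Requested NSID 1 already in use" pairs surrounding this run appears to be expected behavior rather than a failure: while the 5-second randrw workload is in flight, the test seems to keep re-issuing the add-namespace RPC, and the target correctly rejects each attempt because NSID 1 is still attached (the nvmf_rpc_ns_paused line is the paused-subsystem RPC path reporting that same rejection). The pattern can be reproduced in isolation, assuming a subsystem provisioned as above:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        || echo 'second add rejected: NSID 1 already in use'                         # rpc.py exits non-zero on the RPC error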
00:17:05.513 [2024-07-15 14:03:03.429599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.429617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.438411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.438428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.447064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.447082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.456305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.456321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.464719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.464734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.473587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.473604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.482082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.482099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.491174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.491190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.500177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.500193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.508938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.508954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.517092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.517107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.525388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.525404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.513 [2024-07-15 14:03:03.534004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.513 [2024-07-15 14:03:03.534020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.514 [2024-07-15 14:03:03.543007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.514 [2024-07-15 14:03:03.543022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.514 [2024-07-15 14:03:03.552022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.514 
[2024-07-15 14:03:03.552037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:05.514 [2024-07-15 14:03:03.560690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:05.514 [2024-07-15 14:03:03.560706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:05.514 [2024-07-15 14:03:03.569654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:05.514 [2024-07-15 14:03:03.569670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... elided: the same error pair (subsystem.c:2054 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546 "Unable to add namespace") repeats roughly every 9 ms from 14:03:03.578 through 14:03:06.214 ...]
[2024-07-15 14:03:06.223342]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.132 [2024-07-15 14:03:06.223356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.132 [2024-07-15 14:03:06.232227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.132 [2024-07-15 14:03:06.232242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.132 [2024-07-15 14:03:06.240794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.132 [2024-07-15 14:03:06.240812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.249183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.249198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.258220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.258235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.267147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.267162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.275955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.275970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.284350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.284365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.293427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.293442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.302372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.302387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.310980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.310995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.319920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.319935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.328959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.328974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.338223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.338238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.346907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.346923] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.356213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.356228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.364647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.364662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.373179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.373194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.381689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.381704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.390504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.390519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.398880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.398895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.407850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.407865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.416286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.416300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.424948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.424963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.433481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.433496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.441891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.441906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.450685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.450700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.459160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.459175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.468374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.468389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.477279] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.477295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.485923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.485938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.495237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.495252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.393 [2024-07-15 14:03:06.504218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.393 [2024-07-15 14:03:06.504233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.512701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.512716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.521843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.521859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.530351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.530365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.539047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.539063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.548208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.548222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.556698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.556714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.565176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.565191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.573784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.573799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.582930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.582944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.591331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.591346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.599762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.599777] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.608131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.608146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.616668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.616683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.625711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.625727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.634593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.634608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.643222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.643237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.651606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.651622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.660244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.660260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.668702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.668717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.677635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.677651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.686132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.686147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.695317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.695333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.703766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.703782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.712475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.712490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.720767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.720782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.729117] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.729131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.737773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.737788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.746338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.746354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.755036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.755052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.655 [2024-07-15 14:03:06.763888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.655 [2024-07-15 14:03:06.763903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.772346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.772361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.780976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.780992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.789680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.789696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.798605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.798620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.807669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.807685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.816643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.816661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.825100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.825116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.833782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.833797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.843214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.843230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.860232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.860248] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.867838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.867853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.877255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.877271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.885937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.885952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.894416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.894431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.903629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.903645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.912150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.912166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.921383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.921398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.930344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.930359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.939111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.939128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.947600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.947616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.956682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.956697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.965654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.965669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.974745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.974765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.983796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.983811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:06.992802] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:06.992818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:07.001876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:07.001891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:07.010496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:07.010511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:07.018873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:07.018888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.917 [2024-07-15 14:03:07.027743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.917 [2024-07-15 14:03:07.027763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.037148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.037164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.046140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.046154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.054608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.054624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.063281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.063296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.072211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.072230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.081568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.081584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.090079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.090094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.099034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.099049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.108163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.108178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.117232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.117247] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.126384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.126399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.134906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.134922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.143839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.143854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.152664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.152679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.161439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.161454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.170385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.170400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.179716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.179 [2024-07-15 14:03:07.179731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.179 [2024-07-15 14:03:07.188142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.188157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.197262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.197278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.205764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.205780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.214905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.214920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.223605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.223621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.232259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.232274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.240456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.240475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.249242] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.249257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.258254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.258270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.266896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.266911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.275912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.275927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.180 [2024-07-15 14:03:07.284654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.180 [2024-07-15 14:03:07.284670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.293783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.293799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.302091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.302106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.311292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.311308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.320048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.320063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.329061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.329077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.337508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.337523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.346260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.346275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.354841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.441 [2024-07-15 14:03:07.354856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.441 [2024-07-15 14:03:07.363904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.363919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.371733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.371748] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.380532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.380547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.389512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.389526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.398168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.398183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.407254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.407273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.416109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.416124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.425216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.425231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.433606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.433621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.441969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.441984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.450666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.450681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.459116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.459131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.467919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.467935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.476579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.476594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.485454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.485470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.494117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.494133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.502871] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.502886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.511916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.511931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.521061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.521076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.529524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.529539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.538623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.538639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.442 [2024-07-15 14:03:07.547065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.442 [2024-07-15 14:03:07.547080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.556146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.556161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.564591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.564605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.573808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.573827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.582743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.582761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.591604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.591618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.599831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.599846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.608438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.608453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.617415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.617429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.626037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.626052] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.635017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.635032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.643310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.643325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.652265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.652281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.661080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.661096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.670141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.670156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.678565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.678580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.687200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.687215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.696251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.696266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.705012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.705027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.713261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.713276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.722392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.722407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.730912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.730928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.739936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.739952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.748773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.748788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.757509] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.757524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.766419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.766435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.775529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.775544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.783249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.783265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.792377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.792392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.801149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.801165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.704 [2024-07-15 14:03:07.809992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.704 [2024-07-15 14:03:07.810008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.818914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.818930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.827435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.827451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.836676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.836691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.845598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.845613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.854533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.854549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.862822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.862838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.871491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.871506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.966 [2024-07-15 14:03:07.880294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.966 [2024-07-15 14:03:07.880309] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:09.966 [2024-07-15 14:03:07.888981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:09.966 [2024-07-15 14:03:07.888996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for every subsequent add attempt, roughly every 9 ms, from 14:03:07.897838 through 14:03:08.437841; identical repeats condensed ...]
00:17:10.491
00:17:10.491 Latency(us)
00:17:10.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:10.491 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:10.491 Nvme1n1 : 5.00 19536.53 152.63 0.00 0.00 6545.33 2826.24 13489.49
00:17:10.491 ===================================================================================================================
00:17:10.491 Total : 19536.53 152.63 0.00 0.00 6545.33 2826.24 13489.49
[... the error pair resumes at 14:03:08.445845 and repeats through 14:03:08.566158; identical repeats condensed ...]
00:17:10.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1338659) - No such process
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1338659
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.491 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:10.491 delay0
00:17:10.492 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.492 14:03:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:10.492 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.492 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:10.492 14:03:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.492 14:03:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:10.753 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.753 [2024-07-15 14:03:08.694272] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:17.342 Initializing NVMe Controllers 00:17:17.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:17.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:17.342 Initialization complete. Launching workers. 00:17:17.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 22635 00:17:17.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22752, failed to submit 124 00:17:17.342 success 22700, unsuccess 52, failed 0 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.342 rmmod nvme_tcp 00:17:17.342 rmmod nvme_fabrics 00:17:17.342 rmmod nvme_keyring 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1336329 ']' 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1336329 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1336329 ']' 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1336329 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1336329 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1336329' 00:17:17.342 killing process with pid 1336329 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 
1336329 00:17:17.342 14:03:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1336329 00:17:17.342 14:03:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.343 14:03:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.343 14:03:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.343 14:03:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.343 14:03:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.343 14:03:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.343 14:03:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.343 14:03:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.257 14:03:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:19.257 00:17:19.257 real 0m34.127s 00:17:19.257 user 0m44.232s 00:17:19.257 sys 0m11.014s 00:17:19.257 14:03:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.257 14:03:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 ************************************ 00:17:19.257 END TEST nvmf_zcopy 00:17:19.257 ************************************ 00:17:19.257 14:03:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:19.257 14:03:17 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:19.257 14:03:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:19.257 14:03:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.257 14:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 ************************************ 00:17:19.257 START TEST nvmf_nmic 00:17:19.257 ************************************ 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:19.257 * Looking for test storage... 
00:17:19.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.257 14:03:17 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:19.257 14:03:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:27.440 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.440 
14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.440 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.440 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.440 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.440 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.440 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.440 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:27.441 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.441 14:03:25 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:27.441 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:27.441 Found net devices under 0000:31:00.0: cvl_0_0 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:27.441 Found net devices under 0000:31:00.1: cvl_0_1 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
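The nvmf_tcp_init steps traced next split these two ports into a back-to-back target/initiator pair: cvl_0_0 is moved into its own network namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed into a plain ip(8)/iptables sketch — each call below appears in the trace that follows:

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The two ping checks that follow confirm the path in both directions before the target is started inside the namespace.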
00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:17:27.441 00:17:27.441 --- 10.0.0.2 ping statistics --- 00:17:27.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.441 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:17:27.441 00:17:27.441 --- 10.0.0.1 ping statistics --- 00:17:27.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.441 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1346141 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1346141 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1346141 ']' 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.441 14:03:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:27.441 [2024-07-15 14:03:25.473048] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:27.441 [2024-07-15 14:03:25.473109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.441 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.441 [2024-07-15 14:03:25.552904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.702 [2024-07-15 14:03:25.628861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.702 [2024-07-15 14:03:25.628899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:27.702 [2024-07-15 14:03:25.628907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.702 [2024-07-15 14:03:25.628913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.702 [2024-07-15 14:03:25.628919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.702 [2024-07-15 14:03:25.629056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.702 [2024-07-15 14:03:25.629257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.702 [2024-07-15 14:03:25.629414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.702 [2024-07-15 14:03:25.629414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.275 [2024-07-15 14:03:26.314469] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.275 Malloc0 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.275 [2024-07-15 14:03:26.373727] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:28.275 test case1: single bdev can't be used in multiple subsystems 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.275 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 [2024-07-15 14:03:26.409677] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:28.536 [2024-07-15 14:03:26.409695] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:28.536 [2024-07-15 14:03:26.409703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.536 request: 00:17:28.536 { 00:17:28.536 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:28.536 "namespace": { 00:17:28.536 "bdev_name": "Malloc0", 00:17:28.536 "no_auto_visible": false 00:17:28.536 }, 00:17:28.536 "method": "nvmf_subsystem_add_ns", 00:17:28.536 "req_id": 1 00:17:28.536 } 00:17:28.536 Got JSON-RPC error response 00:17:28.536 response: 00:17:28.536 { 00:17:28.536 "code": -32602, 00:17:28.536 "message": "Invalid parameters" 00:17:28.536 } 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:28.536 Adding namespace failed - expected result. 
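The rejected add above is a plain JSON-RPC 2.0 exchange on the target's UNIX-domain socket, so it can be reproduced without the rpc_cmd helper. A hedged sketch — /var/tmp/spdk.sock is the default socket this run's rpc_cmd talks to, and nc is just one convenient client, not something the harness itself uses (Ctrl-C after the reply; the target keeps the socket open):

    printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode2","namespace":{"bdev_name":"Malloc0","no_auto_visible":false}}}' \
        | nc -U /var/tmp/spdk.sock
    # expected while Malloc0 is still claimed by cnode1, matching the -32602 response above:
    # {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}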
00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:28.536 test case2: host connect to nvmf target in multiple paths 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 [2024-07-15 14:03:26.421810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.536 14:03:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:29.922 14:03:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:31.836 14:03:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:31.836 14:03:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:31.836 14:03:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:31.836 14:03:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:31.836 14:03:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:33.749 14:03:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:33.749 14:03:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:33.749 14:03:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:33.749 14:03:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:33.749 14:03:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:33.749 14:03:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:33.749 14:03:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:33.749 [global] 00:17:33.749 thread=1 00:17:33.749 invalidate=1 00:17:33.749 rw=write 00:17:33.749 time_based=1 00:17:33.749 runtime=1 00:17:33.749 ioengine=libaio 00:17:33.749 direct=1 00:17:33.749 bs=4096 00:17:33.749 iodepth=1 00:17:33.749 norandommap=0 00:17:33.749 numjobs=1 00:17:33.749 00:17:33.749 verify_dump=1 00:17:33.749 verify_backlog=512 00:17:33.749 verify_state_save=0 00:17:33.749 do_verify=1 00:17:33.749 verify=crc32c-intel 00:17:33.749 [job0] 00:17:33.749 filename=/dev/nvme0n1 00:17:33.749 Could not set queue depth (nvme0n1) 00:17:33.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:33.749 fio-3.35 00:17:33.749 Starting 1 thread 00:17:35.202 00:17:35.202 job0: (groupid=0, jobs=1): err= 0: pid=1347680: Mon Jul 15 14:03:32 2024 00:17:35.202 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1039msec) 00:17:35.202 slat (nsec): min=24458, max=26123, avg=24914.53, stdev=408.03 
00:17:35.202 clat (usec): min=1112, max=42949, avg=39649.14, stdev=9934.17 00:17:35.202 lat (usec): min=1136, max=42974, avg=39674.05, stdev=9934.28 00:17:35.202 clat percentiles (usec): 00:17:35.202 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41681], 20.00th=[41681], 00:17:35.202 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:35.202 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:35.202 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:35.202 | 99.99th=[42730] 00:17:35.202 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:17:35.202 slat (usec): min=9, max=26923, avg=81.36, stdev=1188.63 00:17:35.202 clat (usec): min=327, max=830, avg=621.74, stdev=96.87 00:17:35.202 lat (usec): min=339, max=27648, avg=703.10, stdev=1197.53 00:17:35.202 clat percentiles (usec): 00:17:35.202 | 1.00th=[ 388], 5.00th=[ 441], 10.00th=[ 482], 20.00th=[ 545], 00:17:35.202 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 652], 00:17:35.203 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 758], 00:17:35.203 | 99.00th=[ 807], 99.50th=[ 807], 99.90th=[ 832], 99.95th=[ 832], 00:17:35.203 | 99.99th=[ 832] 00:17:35.203 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:35.203 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:35.203 lat (usec) : 500=11.53%, 750=78.07%, 1000=7.18% 00:17:35.203 lat (msec) : 2=0.19%, 50=3.02% 00:17:35.203 cpu : usr=0.67%, sys=1.45%, ctx=532, majf=0, minf=1 00:17:35.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:35.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.203 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:35.203 00:17:35.203 Run status group 0 (all jobs): 00:17:35.203 READ: bw=65.4KiB/s (67.0kB/s), 65.4KiB/s-65.4KiB/s (67.0kB/s-67.0kB/s), io=68.0KiB (69.6kB), run=1039-1039msec 00:17:35.203 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec 00:17:35.203 00:17:35.203 Disk stats (read/write): 00:17:35.203 nvme0n1: ios=38/512, merge=0/0, ticks=1481/296, in_queue=1777, util=98.70% 00:17:35.203 14:03:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.203 rmmod nvme_tcp 00:17:35.203 rmmod nvme_fabrics 00:17:35.203 rmmod nvme_keyring 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1346141 ']' 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1346141 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1346141 ']' 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1346141 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1346141 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1346141' 00:17:35.203 killing process with pid 1346141 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1346141 00:17:35.203 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1346141 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.463 14:03:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.373 14:03:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.643 00:17:37.644 real 0m18.265s 00:17:37.644 user 0m46.432s 00:17:37.644 sys 0m6.653s 00:17:37.644 14:03:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.644 14:03:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:37.644 ************************************ 00:17:37.644 END TEST nvmf_nmic 00:17:37.644 ************************************ 00:17:37.644 14:03:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:37.644 14:03:35 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:37.644 14:03:35 
nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:37.644 14:03:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.644 14:03:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.644 ************************************ 00:17:37.644 START TEST nvmf_fio_target 00:17:37.644 ************************************ 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:37.644 * Looking for test storage... 00:17:37.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.644 14:03:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.783 14:03:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:45.783 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:45.783 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.783 14:03:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:45.783 Found net devices under 0000:31:00.0: cvl_0_0 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.783 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:45.784 Found net devices under 0000:31:00.1: cvl_0_1 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.784 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:17:46.045 00:17:46.045 --- 10.0.0.2 ping statistics --- 00:17:46.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.045 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:17:46.045 00:17:46.045 --- 10.0.0.1 ping statistics --- 00:17:46.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.045 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1352689 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1352689 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1352689 ']' 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
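For reference, the interface plumbing that nvmf_tcp_init performed in the trace above reduces to the following shell sequence (a condensed sketch of the commands visible in the log; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are the values this particular run picked, not fixed constants):

    # Move the target-side port into its own network namespace; the
    # initiator port stays in the root namespace, giving a two-host
    # NVMe/TCP topology on a single machine.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) in on the initiator side, then verify
    # reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the @480 line above), so the target listens on 10.0.0.2 while fio connects from the root namespace via 10.0.0.1.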
00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.045 14:03:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 [2024-07-15 14:03:44.007221] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:46.045 [2024-07-15 14:03:44.007280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.045 [2024-07-15 14:03:44.080952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.045 [2024-07-15 14:03:44.145863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.045 [2024-07-15 14:03:44.145899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.045 [2024-07-15 14:03:44.145906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.045 [2024-07-15 14:03:44.145913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.045 [2024-07-15 14:03:44.145919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.045 [2024-07-15 14:03:44.146064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.045 [2024-07-15 14:03:44.146180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.045 [2024-07-15 14:03:44.146340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.045 [2024-07-15 14:03:44.146341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:46.990 [2024-07-15 14:03:44.952823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.990 14:03:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:47.249 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:47.249 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:47.249 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:47.249 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:47.510 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
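The target bring-up that fio.sh drives through rpc.py is easier to follow pulled out of the interleaved trace. A condensed sketch of the calls made so far, using the rpc_py path set at target/fio.sh@14 (the -o/-u 8192 transport options come from NVMF_TRANSPORT_OPTS, and the 64/512 malloc geometry from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE, both set earlier in this log):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with in-capsule data enabled, 8192-byte buffers
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdevs with a 512-byte block size; the first two
    # become plain namespaces of cnode1, the next ones feed the raid0
    # and concat0 bdevs built in the following steps
    $rpc_py bdev_malloc_create 64 512   # -> Malloc0
    $rpc_py bdev_malloc_create 64 512   # -> Malloc1
    $rpc_py bdev_malloc_create 64 512   # -> Malloc2 (raid0 member)

As the trace below shows, the script goes on to assemble raid0 from Malloc2/Malloc3 and concat0 from Malloc4-Malloc6, creates subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, attaches all of these as namespaces, and adds a TCP listener on 10.0.0.2 port 4420 before the initiator-side nvme connect.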
00:17:47.510 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:47.770 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:47.770 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:47.770 14:03:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:48.031 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:48.031 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:48.292 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:48.292 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:48.292 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:48.292 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:48.553 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:48.812 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:48.812 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.812 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:48.812 14:03:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:49.071 14:03:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.331 [2024-07-15 14:03:47.202203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.331 14:03:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:49.331 14:03:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:49.590 14:03:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.972 14:03:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:50.972 14:03:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:50.972 14:03:49 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.972 14:03:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:50.972 14:03:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:50.972 14:03:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:53.510 14:03:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:53.510 14:03:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:53.510 14:03:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.510 14:03:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:53.510 14:03:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.510 14:03:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:53.510 14:03:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:53.510 [global] 00:17:53.510 thread=1 00:17:53.510 invalidate=1 00:17:53.510 rw=write 00:17:53.510 time_based=1 00:17:53.510 runtime=1 00:17:53.510 ioengine=libaio 00:17:53.510 direct=1 00:17:53.510 bs=4096 00:17:53.510 iodepth=1 00:17:53.510 norandommap=0 00:17:53.510 numjobs=1 00:17:53.510 00:17:53.510 verify_dump=1 00:17:53.510 verify_backlog=512 00:17:53.510 verify_state_save=0 00:17:53.510 do_verify=1 00:17:53.510 verify=crc32c-intel 00:17:53.510 [job0] 00:17:53.510 filename=/dev/nvme0n1 00:17:53.510 [job1] 00:17:53.510 filename=/dev/nvme0n2 00:17:53.510 [job2] 00:17:53.510 filename=/dev/nvme0n3 00:17:53.510 [job3] 00:17:53.510 filename=/dev/nvme0n4 00:17:53.510 Could not set queue depth (nvme0n1) 00:17:53.510 Could not set queue depth (nvme0n2) 00:17:53.510 Could not set queue depth (nvme0n3) 00:17:53.510 Could not set queue depth (nvme0n4) 00:17:53.510 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:53.510 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:53.510 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:53.510 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:53.510 fio-3.35 00:17:53.510 Starting 4 threads 00:17:54.894 00:17:54.894 job0: (groupid=0, jobs=1): err= 0: pid=1354283: Mon Jul 15 14:03:52 2024 00:17:54.894 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:54.894 slat (nsec): min=6843, max=63810, avg=23705.68, stdev=7172.71 00:17:54.894 clat (usec): min=434, max=41139, avg=917.47, stdev=1787.79 00:17:54.894 lat (usec): min=443, max=41165, avg=941.18, stdev=1787.99 00:17:54.894 clat percentiles (usec): 00:17:54.894 | 1.00th=[ 553], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 750], 00:17:54.894 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 824], 00:17:54.894 | 70.00th=[ 857], 80.00th=[ 979], 90.00th=[ 1037], 95.00th=[ 1057], 00:17:54.894 | 99.00th=[ 1467], 99.50th=[ 1532], 99.90th=[41157], 99.95th=[41157], 00:17:54.894 | 99.99th=[41157] 00:17:54.894 write: IOPS=993, BW=3972KiB/s (4067kB/s)(3976KiB/1001msec); 0 zone resets 00:17:54.894 slat (nsec): min=9575, max=54713, avg=28285.76, stdev=10468.63 00:17:54.894 clat 
(usec): min=222, max=893, avg=482.50, stdev=95.85 00:17:54.894 lat (usec): min=256, max=908, avg=510.79, stdev=97.53 00:17:54.894 clat percentiles (usec): 00:17:54.894 | 1.00th=[ 269], 5.00th=[ 334], 10.00th=[ 363], 20.00th=[ 396], 00:17:54.894 | 30.00th=[ 437], 40.00th=[ 469], 50.00th=[ 490], 60.00th=[ 506], 00:17:54.894 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 635], 00:17:54.894 | 99.00th=[ 742], 99.50th=[ 816], 99.90th=[ 898], 99.95th=[ 898], 00:17:54.894 | 99.99th=[ 898] 00:17:54.894 bw ( KiB/s): min= 4096, max= 4096, per=42.09%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.894 lat (usec) : 250=0.20%, 500=37.38%, 750=35.06%, 1000=21.45% 00:17:54.894 lat (msec) : 2=5.78%, 4=0.07%, 50=0.07% 00:17:54.894 cpu : usr=2.00%, sys=4.20%, ctx=1508, majf=0, minf=1 00:17:54.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.894 issued rwts: total=512,994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.894 job1: (groupid=0, jobs=1): err= 0: pid=1354284: Mon Jul 15 14:03:52 2024 00:17:54.894 read: IOPS=18, BW=73.1KiB/s (74.8kB/s)(76.0KiB/1040msec) 00:17:54.894 slat (nsec): min=25273, max=26255, avg=25762.47, stdev=281.43 00:17:54.894 clat (usec): min=40928, max=42020, avg=41302.55, stdev=447.96 00:17:54.894 lat (usec): min=40954, max=42046, avg=41328.31, stdev=447.99 00:17:54.894 clat percentiles (usec): 00:17:54.894 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:54.894 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:54.894 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:54.894 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:54.894 | 99.99th=[42206] 00:17:54.894 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:17:54.894 slat (nsec): min=9608, max=68335, avg=28914.31, stdev=10408.21 00:17:54.894 clat (usec): min=163, max=768, avg=460.30, stdev=91.45 00:17:54.894 lat (usec): min=174, max=801, avg=489.22, stdev=96.24 00:17:54.894 clat percentiles (usec): 00:17:54.894 | 1.00th=[ 255], 5.00th=[ 297], 10.00th=[ 322], 20.00th=[ 383], 00:17:54.894 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 478], 60.00th=[ 494], 00:17:54.894 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 594], 00:17:54.894 | 99.00th=[ 668], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 766], 00:17:54.894 | 99.99th=[ 766] 00:17:54.894 bw ( KiB/s): min= 4096, max= 4096, per=42.09%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.894 lat (usec) : 250=0.94%, 500=60.45%, 750=34.65%, 1000=0.38% 00:17:54.894 lat (msec) : 50=3.58% 00:17:54.894 cpu : usr=0.58%, sys=1.44%, ctx=533, majf=0, minf=1 00:17:54.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.895 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.895 job2: (groupid=0, jobs=1): err= 0: pid=1354285: Mon Jul 15 
14:03:52 2024 00:17:54.895 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:17:54.895 slat (nsec): min=8780, max=30273, avg=24198.96, stdev=5856.22 00:17:54.895 clat (usec): min=609, max=41972, avg=33573.06, stdev=15552.40 00:17:54.895 lat (usec): min=620, max=41998, avg=33597.26, stdev=15554.85 00:17:54.895 clat percentiles (usec): 00:17:54.895 | 1.00th=[ 611], 5.00th=[ 914], 10.00th=[ 922], 20.00th=[28443], 00:17:54.895 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:54.895 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:17:54.895 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:54.895 | 99.99th=[42206] 00:17:54.895 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:17:54.895 slat (nsec): min=9727, max=51494, avg=27942.15, stdev=10688.84 00:17:54.895 clat (usec): min=145, max=857, avg=479.80, stdev=125.52 00:17:54.895 lat (usec): min=156, max=891, avg=507.75, stdev=126.43 00:17:54.895 clat percentiles (usec): 00:17:54.895 | 1.00th=[ 233], 5.00th=[ 273], 10.00th=[ 334], 20.00th=[ 371], 00:17:54.895 | 30.00th=[ 396], 40.00th=[ 441], 50.00th=[ 482], 60.00th=[ 506], 00:17:54.895 | 70.00th=[ 537], 80.00th=[ 594], 90.00th=[ 644], 95.00th=[ 693], 00:17:54.895 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 857], 99.95th=[ 857], 00:17:54.895 | 99.99th=[ 857] 00:17:54.895 bw ( KiB/s): min= 4096, max= 4096, per=42.09%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.895 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.895 lat (usec) : 250=3.18%, 500=52.15%, 750=38.50%, 1000=2.43% 00:17:54.895 lat (msec) : 2=0.19%, 50=3.55% 00:17:54.895 cpu : usr=0.68%, sys=1.45%, ctx=537, majf=0, minf=1 00:17:54.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.895 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.895 job3: (groupid=0, jobs=1): err= 0: pid=1354286: Mon Jul 15 14:03:52 2024 00:17:54.895 read: IOPS=317, BW=1271KiB/s (1301kB/s)(1272KiB/1001msec) 00:17:54.895 slat (nsec): min=22783, max=60061, avg=26337.86, stdev=3135.38 00:17:54.895 clat (usec): min=571, max=41962, avg=2328.25, stdev=7009.32 00:17:54.895 lat (usec): min=606, max=41988, avg=2354.59, stdev=7009.31 00:17:54.895 clat percentiles (usec): 00:17:54.895 | 1.00th=[ 775], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1004], 00:17:54.895 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:17:54.895 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1237], 00:17:54.895 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:17:54.895 | 99.99th=[42206] 00:17:54.895 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:54.895 slat (nsec): min=9636, max=54332, avg=28561.63, stdev=10374.05 00:17:54.895 clat (usec): min=143, max=697, avg=449.83, stdev=91.89 00:17:54.895 lat (usec): min=153, max=731, avg=478.39, stdev=95.98 00:17:54.895 clat percentiles (usec): 00:17:54.895 | 1.00th=[ 231], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 371], 00:17:54.895 | 30.00th=[ 396], 40.00th=[ 429], 50.00th=[ 465], 60.00th=[ 486], 00:17:54.895 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 578], 00:17:54.895 | 99.00th=[ 627], 99.50th=[ 676], 99.90th=[ 701], 
99.95th=[ 701], 00:17:54.895 | 99.99th=[ 701] 00:17:54.895 bw ( KiB/s): min= 4096, max= 4096, per=42.09%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.895 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.895 lat (usec) : 250=1.20%, 500=38.80%, 750=21.93%, 1000=6.99% 00:17:54.895 lat (msec) : 2=29.88%, 50=1.20% 00:17:54.895 cpu : usr=1.10%, sys=2.50%, ctx=832, majf=0, minf=1 00:17:54.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.895 issued rwts: total=318,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.895 00:17:54.895 Run status group 0 (all jobs): 00:17:54.895 READ: bw=3354KiB/s (3434kB/s), 73.1KiB/s-2046KiB/s (74.8kB/s-2095kB/s), io=3488KiB (3572kB), run=1001-1040msec 00:17:54.895 WRITE: bw=9731KiB/s (9964kB/s), 1969KiB/s-3972KiB/s (2016kB/s-4067kB/s), io=9.88MiB (10.4MB), run=1001-1040msec 00:17:54.895 00:17:54.895 Disk stats (read/write): 00:17:54.895 nvme0n1: ios=561/669, merge=0/0, ticks=891/311, in_queue=1202, util=84.07% 00:17:54.895 nvme0n2: ios=63/512, merge=0/0, ticks=1333/231, in_queue=1564, util=87.96% 00:17:54.895 nvme0n3: ios=67/512, merge=0/0, ticks=670/241, in_queue=911, util=95.14% 00:17:54.895 nvme0n4: ios=235/512, merge=0/0, ticks=698/225, in_queue=923, util=97.33% 00:17:54.895 14:03:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:54.895 [global] 00:17:54.895 thread=1 00:17:54.895 invalidate=1 00:17:54.895 rw=randwrite 00:17:54.895 time_based=1 00:17:54.895 runtime=1 00:17:54.895 ioengine=libaio 00:17:54.895 direct=1 00:17:54.895 bs=4096 00:17:54.895 iodepth=1 00:17:54.895 norandommap=0 00:17:54.895 numjobs=1 00:17:54.895 00:17:54.895 verify_dump=1 00:17:54.895 verify_backlog=512 00:17:54.895 verify_state_save=0 00:17:54.895 do_verify=1 00:17:54.895 verify=crc32c-intel 00:17:54.895 [job0] 00:17:54.895 filename=/dev/nvme0n1 00:17:54.895 [job1] 00:17:54.895 filename=/dev/nvme0n2 00:17:54.895 [job2] 00:17:54.895 filename=/dev/nvme0n3 00:17:54.895 [job3] 00:17:54.895 filename=/dev/nvme0n4 00:17:54.895 Could not set queue depth (nvme0n1) 00:17:54.895 Could not set queue depth (nvme0n2) 00:17:54.895 Could not set queue depth (nvme0n3) 00:17:54.895 Could not set queue depth (nvme0n4) 00:17:55.157 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.157 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.157 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.157 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.157 fio-3.35 00:17:55.157 Starting 4 threads 00:17:56.543 00:17:56.543 job0: (groupid=0, jobs=1): err= 0: pid=1354808: Mon Jul 15 14:03:54 2024 00:17:56.543 read: IOPS=135, BW=543KiB/s (556kB/s)(548KiB/1009msec) 00:17:56.543 slat (nsec): min=6519, max=44216, avg=24136.53, stdev=6529.33 00:17:56.543 clat (usec): min=293, max=41113, avg=5911.77, stdev=13689.91 00:17:56.543 lat (usec): min=300, max=41138, avg=5935.91, stdev=13690.26 00:17:56.543 clat percentiles (usec): 00:17:56.543 | 1.00th=[ 
314], 5.00th=[ 416], 10.00th=[ 457], 20.00th=[ 529], 00:17:56.543 | 30.00th=[ 619], 40.00th=[ 635], 50.00th=[ 652], 60.00th=[ 668], 00:17:56.543 | 70.00th=[ 685], 80.00th=[ 701], 90.00th=[41157], 95.00th=[41157], 00:17:56.543 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:56.543 | 99.99th=[41157] 00:17:56.543 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:17:56.544 slat (nsec): min=8800, max=48867, avg=24094.09, stdev=10099.75 00:17:56.544 clat (usec): min=137, max=732, avg=349.51, stdev=81.16 00:17:56.544 lat (usec): min=159, max=761, avg=373.60, stdev=82.96 00:17:56.544 clat percentiles (usec): 00:17:56.544 | 1.00th=[ 212], 5.00th=[ 233], 10.00th=[ 247], 20.00th=[ 277], 00:17:56.544 | 30.00th=[ 314], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 363], 00:17:56.544 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 453], 95.00th=[ 498], 00:17:56.544 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 734], 99.95th=[ 734], 00:17:56.544 | 99.99th=[ 734] 00:17:56.544 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.544 lat (usec) : 250=8.63%, 500=70.26%, 750=18.18%, 1000=0.15% 00:17:56.544 lat (msec) : 50=2.77% 00:17:56.544 cpu : usr=0.40%, sys=1.98%, ctx=649, majf=0, minf=1 00:17:56.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 issued rwts: total=137,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.544 job1: (groupid=0, jobs=1): err= 0: pid=1354809: Mon Jul 15 14:03:54 2024 00:17:56.544 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1013msec) 00:17:56.544 slat (nsec): min=10324, max=25903, avg=24569.18, stdev=3680.40 00:17:56.544 clat (usec): min=41535, max=42029, avg=41942.08, stdev=111.13 00:17:56.544 lat (usec): min=41545, max=42055, avg=41966.65, stdev=114.56 00:17:56.544 clat percentiles (usec): 00:17:56.544 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:56.544 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:56.544 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:56.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.544 | 99.99th=[42206] 00:17:56.544 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:17:56.544 slat (nsec): min=9322, max=50047, avg=26793.30, stdev=9927.68 00:17:56.544 clat (usec): min=226, max=994, avg=550.90, stdev=138.28 00:17:56.544 lat (usec): min=256, max=1005, avg=577.69, stdev=141.02 00:17:56.544 clat percentiles (usec): 00:17:56.544 | 1.00th=[ 258], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 420], 00:17:56.544 | 30.00th=[ 482], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 603], 00:17:56.544 | 70.00th=[ 627], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 791], 00:17:56.544 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 996], 00:17:56.544 | 99.99th=[ 996] 00:17:56.544 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.544 lat (usec) : 250=0.76%, 500=31.95%, 750=57.09%, 1000=6.99% 00:17:56.544 lat (msec) : 50=3.21% 00:17:56.544 cpu : usr=0.69%, sys=1.38%, ctx=530, 
majf=0, minf=1 00:17:56.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.544 job2: (groupid=0, jobs=1): err= 0: pid=1354810: Mon Jul 15 14:03:54 2024 00:17:56.544 read: IOPS=274, BW=1099KiB/s (1125kB/s)(1144KiB/1041msec) 00:17:56.544 slat (nsec): min=7095, max=59564, avg=25058.92, stdev=3869.37 00:17:56.544 clat (usec): min=762, max=42020, avg=2415.94, stdev=7134.04 00:17:56.544 lat (usec): min=791, max=42045, avg=2441.00, stdev=7133.93 00:17:56.544 clat percentiles (usec): 00:17:56.544 | 1.00th=[ 832], 5.00th=[ 988], 10.00th=[ 1029], 20.00th=[ 1074], 00:17:56.544 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:17:56.544 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1221], 95.00th=[ 1254], 00:17:56.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.544 | 99.99th=[42206] 00:17:56.544 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:17:56.544 slat (nsec): min=9155, max=56405, avg=26867.30, stdev=8490.88 00:17:56.544 clat (usec): min=277, max=975, avg=630.16, stdev=130.08 00:17:56.544 lat (usec): min=290, max=1005, avg=657.03, stdev=133.13 00:17:56.544 clat percentiles (usec): 00:17:56.544 | 1.00th=[ 293], 5.00th=[ 396], 10.00th=[ 457], 20.00th=[ 515], 00:17:56.544 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 668], 00:17:56.544 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 832], 00:17:56.544 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 979], 99.95th=[ 979], 00:17:56.544 | 99.99th=[ 979] 00:17:56.544 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.544 lat (usec) : 500=9.90%, 750=43.36%, 1000=12.91% 00:17:56.544 lat (msec) : 2=32.71%, 50=1.13% 00:17:56.544 cpu : usr=0.77%, sys=2.50%, ctx=798, majf=0, minf=1 00:17:56.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 issued rwts: total=286,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.544 job3: (groupid=0, jobs=1): err= 0: pid=1354811: Mon Jul 15 14:03:54 2024 00:17:56.544 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1013msec) 00:17:56.544 slat (nsec): min=10946, max=27415, avg=26112.47, stdev=3914.34 00:17:56.544 clat (usec): min=1257, max=42153, avg=39529.41, stdev=9865.17 00:17:56.544 lat (usec): min=1268, max=42179, avg=39555.52, stdev=9869.07 00:17:56.544 clat percentiles (usec): 00:17:56.544 | 1.00th=[ 1254], 5.00th=[ 1254], 10.00th=[41157], 20.00th=[41681], 00:17:56.544 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:56.544 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:56.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.544 | 99.99th=[42206] 00:17:56.544 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:17:56.544 slat (usec): min=9, max=103, avg=27.75, stdev=13.32 
00:17:56.544 clat (usec): min=271, max=1576, avg=629.36, stdev=153.14 00:17:56.544 lat (usec): min=282, max=1608, avg=657.11, stdev=158.59 00:17:56.544 clat percentiles (usec): 00:17:56.544 | 1.00th=[ 310], 5.00th=[ 375], 10.00th=[ 416], 20.00th=[ 502], 00:17:56.544 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:17:56.544 | 70.00th=[ 701], 80.00th=[ 750], 90.00th=[ 807], 95.00th=[ 865], 00:17:56.544 | 99.00th=[ 979], 99.50th=[ 1037], 99.90th=[ 1582], 99.95th=[ 1582], 00:17:56.544 | 99.99th=[ 1582] 00:17:56.544 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.544 lat (usec) : 500=19.09%, 750=57.66%, 1000=19.28% 00:17:56.544 lat (msec) : 2=0.95%, 50=3.02% 00:17:56.544 cpu : usr=0.59%, sys=1.68%, ctx=533, majf=0, minf=1 00:17:56.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.544 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.544 00:17:56.544 Run status group 0 (all jobs): 00:17:56.544 READ: bw=1756KiB/s (1798kB/s), 67.1KiB/s-1099KiB/s (68.7kB/s-1125kB/s), io=1828KiB (1872kB), run=1009-1041msec 00:17:56.544 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2030KiB/s (2015kB/s-2078kB/s), io=8192KiB (8389kB), run=1009-1041msec 00:17:56.544 00:17:56.544 Disk stats (read/write): 00:17:56.544 nvme0n1: ios=182/512, merge=0/0, ticks=648/171, in_queue=819, util=87.98% 00:17:56.544 nvme0n2: ios=62/512, merge=0/0, ticks=862/284, in_queue=1146, util=96.74% 00:17:56.544 nvme0n3: ios=281/512, merge=0/0, ticks=472/312, in_queue=784, util=88.50% 00:17:56.544 nvme0n4: ios=41/512, merge=0/0, ticks=866/306, in_queue=1172, util=98.61% 00:17:56.544 14:03:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:56.544 [global] 00:17:56.544 thread=1 00:17:56.544 invalidate=1 00:17:56.544 rw=write 00:17:56.544 time_based=1 00:17:56.544 runtime=1 00:17:56.544 ioengine=libaio 00:17:56.544 direct=1 00:17:56.544 bs=4096 00:17:56.544 iodepth=128 00:17:56.544 norandommap=0 00:17:56.544 numjobs=1 00:17:56.544 00:17:56.544 verify_dump=1 00:17:56.544 verify_backlog=512 00:17:56.544 verify_state_save=0 00:17:56.544 do_verify=1 00:17:56.544 verify=crc32c-intel 00:17:56.544 [job0] 00:17:56.544 filename=/dev/nvme0n1 00:17:56.544 [job1] 00:17:56.544 filename=/dev/nvme0n2 00:17:56.544 [job2] 00:17:56.544 filename=/dev/nvme0n3 00:17:56.544 [job3] 00:17:56.544 filename=/dev/nvme0n4 00:17:56.544 Could not set queue depth (nvme0n1) 00:17:56.544 Could not set queue depth (nvme0n2) 00:17:56.544 Could not set queue depth (nvme0n3) 00:17:56.544 Could not set queue depth (nvme0n4) 00:17:56.806 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.806 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.806 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.806 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.806 fio-3.35 00:17:56.806 Starting 4 
threads 00:17:58.194 00:17:58.194 job0: (groupid=0, jobs=1): err= 0: pid=1355337: Mon Jul 15 14:03:56 2024 00:17:58.194 read: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec) 00:17:58.194 slat (nsec): min=881, max=20847k, avg=120131.16, stdev=1018241.85 00:17:58.194 clat (usec): min=1430, max=48191, avg=15404.88, stdev=7617.30 00:17:58.194 lat (usec): min=1438, max=48199, avg=15525.01, stdev=7717.47 00:17:58.194 clat percentiles (usec): 00:17:58.194 | 1.00th=[ 4424], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 8160], 00:17:58.194 | 30.00th=[ 8586], 40.00th=[13042], 50.00th=[14484], 60.00th=[16712], 00:17:58.194 | 70.00th=[18482], 80.00th=[22414], 90.00th=[25822], 95.00th=[28705], 00:17:58.194 | 99.00th=[39584], 99.50th=[44827], 99.90th=[47973], 99.95th=[47973], 00:17:58.194 | 99.99th=[47973] 00:17:58.194 write: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1018msec); 0 zone resets 00:17:58.194 slat (nsec): min=1587, max=17131k, avg=164163.74, stdev=1043157.24 00:17:58.194 clat (usec): min=414, max=113978, avg=22874.92, stdev=25912.96 00:17:58.194 lat (usec): min=423, max=113986, avg=23039.08, stdev=26095.85 00:17:58.194 clat percentiles (usec): 00:17:58.194 | 1.00th=[ 865], 5.00th=[ 2089], 10.00th=[ 4178], 20.00th=[ 6390], 00:17:58.194 | 30.00th=[ 7177], 40.00th=[ 9372], 50.00th=[ 12911], 60.00th=[ 16450], 00:17:58.194 | 70.00th=[ 20055], 80.00th=[ 38011], 90.00th=[ 62129], 95.00th=[ 93848], 00:17:58.194 | 99.00th=[110625], 99.50th=[112722], 99.90th=[113771], 99.95th=[113771], 00:17:58.194 | 99.99th=[113771] 00:17:58.194 bw ( KiB/s): min= 7120, max=20480, per=16.44%, avg=13800.00, stdev=9446.95, samples=2 00:17:58.194 iops : min= 1780, max= 5120, avg=3450.00, stdev=2361.74, samples=2 00:17:58.194 lat (usec) : 500=0.05%, 750=0.26%, 1000=0.44% 00:17:58.194 lat (msec) : 2=1.73%, 4=2.06%, 10=34.04%, 20=34.53%, 50=19.91% 00:17:58.194 lat (msec) : 100=5.19%, 250=1.80% 00:17:58.194 cpu : usr=3.64%, sys=3.05%, ctx=273, majf=0, minf=1 00:17:58.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:58.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:58.194 issued rwts: total=3072,3577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:58.194 job1: (groupid=0, jobs=1): err= 0: pid=1355338: Mon Jul 15 14:03:56 2024 00:17:58.194 read: IOPS=9315, BW=36.4MiB/s (38.2MB/s)(36.5MiB/1003msec) 00:17:58.194 slat (nsec): min=939, max=8416.0k, avg=46590.12, stdev=348995.26 00:17:58.194 clat (usec): min=1332, max=21062, avg=6305.74, stdev=1988.57 00:17:58.194 lat (usec): min=2130, max=22658, avg=6352.33, stdev=2012.38 00:17:58.194 clat percentiles (usec): 00:17:58.194 | 1.00th=[ 3392], 5.00th=[ 4178], 10.00th=[ 4424], 20.00th=[ 4948], 00:17:58.194 | 30.00th=[ 5211], 40.00th=[ 5538], 50.00th=[ 5866], 60.00th=[ 6128], 00:17:58.194 | 70.00th=[ 6652], 80.00th=[ 7504], 90.00th=[ 8586], 95.00th=[ 9896], 00:17:58.194 | 99.00th=[14222], 99.50th=[14484], 99.90th=[17433], 99.95th=[20055], 00:17:58.194 | 99.99th=[21103] 00:17:58.194 write: IOPS=9698, BW=37.9MiB/s (39.7MB/s)(38.0MiB/1003msec); 0 zone resets 00:17:58.194 slat (nsec): min=1588, max=25101k, avg=54831.46, stdev=462861.91 00:17:58.194 clat (usec): min=1490, max=58073, avg=6667.67, stdev=8040.69 00:17:58.194 lat (usec): min=1507, max=58082, avg=6722.50, stdev=8098.96 00:17:58.194 clat percentiles (usec): 00:17:58.194 | 1.00th=[ 1975], 5.00th=[ 2868], 
10.00th=[ 3195], 20.00th=[ 3589], 00:17:58.194 | 30.00th=[ 4293], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5342], 00:17:58.194 | 70.00th=[ 5669], 80.00th=[ 6390], 90.00th=[ 7570], 95.00th=[11076], 00:17:58.194 | 99.00th=[53216], 99.50th=[54264], 99.90th=[57934], 99.95th=[57934], 00:17:58.194 | 99.99th=[57934] 00:17:58.194 bw ( KiB/s): min=28672, max=49144, per=46.35%, avg=38908.00, stdev=14475.89, samples=2 00:17:58.194 iops : min= 7168, max=12286, avg=9727.00, stdev=3618.97, samples=2 00:17:58.194 lat (msec) : 2=0.56%, 4=13.96%, 10=80.51%, 20=2.85%, 50=1.39% 00:17:58.194 lat (msec) : 100=0.73% 00:17:58.194 cpu : usr=4.29%, sys=6.79%, ctx=593, majf=0, minf=1 00:17:58.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:58.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:58.194 issued rwts: total=9343,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:58.194 job2: (groupid=0, jobs=1): err= 0: pid=1355339: Mon Jul 15 14:03:56 2024 00:17:58.194 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:17:58.194 slat (nsec): min=918, max=27197k, avg=133490.02, stdev=1064065.20 00:17:58.194 clat (usec): min=6420, max=75124, avg=17024.62, stdev=12682.92 00:17:58.194 lat (usec): min=6422, max=75151, avg=17158.11, stdev=12797.54 00:17:58.194 clat percentiles (usec): 00:17:58.194 | 1.00th=[ 6980], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 8848], 00:17:58.194 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[15139], 00:17:58.194 | 70.00th=[16909], 80.00th=[23725], 90.00th=[34866], 95.00th=[49021], 00:17:58.194 | 99.00th=[69731], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:17:58.194 | 99.99th=[74974] 00:17:58.194 write: IOPS=3710, BW=14.5MiB/s (15.2MB/s)(14.7MiB/1011msec); 0 zone resets 00:17:58.194 slat (nsec): min=1615, max=32298k, avg=134999.30, stdev=1198227.72 00:17:58.194 clat (usec): min=5199, max=86075, avg=17465.43, stdev=13863.43 00:17:58.194 lat (usec): min=5208, max=86107, avg=17600.43, stdev=13995.65 00:17:58.194 clat percentiles (usec): 00:17:58.194 | 1.00th=[ 6063], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7832], 00:17:58.194 | 30.00th=[ 8094], 40.00th=[ 9896], 50.00th=[12518], 60.00th=[14091], 00:17:58.194 | 70.00th=[18220], 80.00th=[26870], 90.00th=[35390], 95.00th=[58459], 00:17:58.194 | 99.00th=[68682], 99.50th=[68682], 99.90th=[74974], 99.95th=[86508], 00:17:58.194 | 99.99th=[86508] 00:17:58.194 bw ( KiB/s): min= 8520, max=20472, per=17.27%, avg=14496.00, stdev=8451.34, samples=2 00:17:58.194 iops : min= 2130, max= 5118, avg=3624.00, stdev=2112.84, samples=2 00:17:58.194 lat (msec) : 10=45.19%, 20=28.64%, 50=20.94%, 100=5.22% 00:17:58.194 cpu : usr=1.88%, sys=3.07%, ctx=419, majf=0, minf=1 00:17:58.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:58.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:58.195 issued rwts: total=3584,3751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:58.195 job3: (groupid=0, jobs=1): err= 0: pid=1355340: Mon Jul 15 14:03:56 2024 00:17:58.195 read: IOPS=4023, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1018msec) 00:17:58.195 slat (nsec): min=1408, max=13694k, avg=115836.30, stdev=876684.14 00:17:58.195 
clat (usec): min=3151, max=49276, avg=14825.57, stdev=5311.69 00:17:58.195 lat (usec): min=3156, max=49281, avg=14941.41, stdev=5394.93 00:17:58.195 clat percentiles (usec): 00:17:58.195 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10421], 00:17:58.195 | 30.00th=[11731], 40.00th=[13566], 50.00th=[14615], 60.00th=[15139], 00:17:58.195 | 70.00th=[16057], 80.00th=[18220], 90.00th=[20579], 95.00th=[22938], 00:17:58.195 | 99.00th=[36439], 99.50th=[43779], 99.90th=[49021], 99.95th=[49021], 00:17:58.195 | 99.99th=[49021] 00:17:58.195 write: IOPS=4233, BW=16.5MiB/s (17.3MB/s)(16.8MiB/1018msec); 0 zone resets 00:17:58.195 slat (nsec): min=1751, max=15811k, avg=115273.13, stdev=815187.71 00:17:58.195 clat (usec): min=1135, max=80142, avg=15764.56, stdev=13570.64 00:17:58.195 lat (usec): min=1148, max=80183, avg=15879.84, stdev=13664.44 00:17:58.195 clat percentiles (usec): 00:17:58.195 | 1.00th=[ 4490], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 8029], 00:17:58.195 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[10290], 60.00th=[12649], 00:17:58.195 | 70.00th=[14353], 80.00th=[18744], 90.00th=[33817], 95.00th=[44827], 00:17:58.195 | 99.00th=[74974], 99.50th=[76022], 99.90th=[80217], 99.95th=[80217], 00:17:58.195 | 99.99th=[80217] 00:17:58.195 bw ( KiB/s): min=12976, max=20480, per=19.93%, avg=16728.00, stdev=5306.13, samples=2 00:17:58.195 iops : min= 3244, max= 5120, avg=4182.00, stdev=1326.53, samples=2 00:17:58.195 lat (msec) : 2=0.12%, 4=0.17%, 10=27.52%, 20=57.11%, 50=13.12% 00:17:58.195 lat (msec) : 100=1.96% 00:17:58.195 cpu : usr=3.05%, sys=5.31%, ctx=236, majf=0, minf=1 00:17:58.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:58.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:58.195 issued rwts: total=4096,4310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:58.195 00:17:58.195 Run status group 0 (all jobs): 00:17:58.195 READ: bw=77.1MiB/s (80.9MB/s), 11.8MiB/s-36.4MiB/s (12.4MB/s-38.2MB/s), io=78.5MiB (82.3MB), run=1003-1018msec 00:17:58.195 WRITE: bw=82.0MiB/s (86.0MB/s), 13.7MiB/s-37.9MiB/s (14.4MB/s-39.7MB/s), io=83.5MiB (87.5MB), run=1003-1018msec 00:17:58.195 00:17:58.195 Disk stats (read/write): 00:17:58.195 nvme0n1: ios=2861/3072, merge=0/0, ticks=41243/57476, in_queue=98719, util=89.68% 00:17:58.195 nvme0n2: ios=7494/7680, merge=0/0, ticks=46425/52787, in_queue=99212, util=94.09% 00:17:58.195 nvme0n3: ios=3092/3172, merge=0/0, ticks=23231/21989, in_queue=45220, util=100.00% 00:17:58.195 nvme0n4: ios=3624/3855, merge=0/0, ticks=50102/46555, in_queue=96657, util=100.00% 00:17:58.195 14:03:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:58.195 [global] 00:17:58.195 thread=1 00:17:58.195 invalidate=1 00:17:58.195 rw=randwrite 00:17:58.195 time_based=1 00:17:58.195 runtime=1 00:17:58.195 ioengine=libaio 00:17:58.195 direct=1 00:17:58.195 bs=4096 00:17:58.195 iodepth=128 00:17:58.195 norandommap=0 00:17:58.195 numjobs=1 00:17:58.195 00:17:58.195 verify_dump=1 00:17:58.195 verify_backlog=512 00:17:58.195 verify_state_save=0 00:17:58.195 do_verify=1 00:17:58.195 verify=crc32c-intel 00:17:58.195 [job0] 00:17:58.195 filename=/dev/nvme0n1 00:17:58.195 [job1] 00:17:58.195 filename=/dev/nvme0n2 00:17:58.195 [job2] 00:17:58.195 
filename=/dev/nvme0n3 00:17:58.195 [job3] 00:17:58.195 filename=/dev/nvme0n4 00:17:58.195 Could not set queue depth (nvme0n1) 00:17:58.195 Could not set queue depth (nvme0n2) 00:17:58.195 Could not set queue depth (nvme0n3) 00:17:58.195 Could not set queue depth (nvme0n4) 00:17:58.456 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.456 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.456 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.456 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.456 fio-3.35 00:17:58.456 Starting 4 threads 00:17:59.843 00:17:59.843 job0: (groupid=0, jobs=1): err= 0: pid=1355856: Mon Jul 15 14:03:57 2024 00:17:59.843 read: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec) 00:17:59.843 slat (nsec): min=900, max=20020k, avg=64637.61, stdev=574233.44 00:17:59.843 clat (usec): min=2235, max=42975, avg=8381.29, stdev=3847.07 00:17:59.843 lat (usec): min=2241, max=42981, avg=8445.92, stdev=3890.40 00:17:59.843 clat percentiles (usec): 00:17:59.843 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6587], 00:17:59.843 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7963], 00:17:59.843 | 70.00th=[ 8356], 80.00th=[ 9372], 90.00th=[10814], 95.00th=[12911], 00:17:59.843 | 99.00th=[29754], 99.50th=[31851], 99.90th=[42730], 99.95th=[42730], 00:17:59.843 | 99.99th=[42730] 00:17:59.843 write: IOPS=7839, BW=30.6MiB/s (32.1MB/s)(30.8MiB/1007msec); 0 zone resets 00:17:59.843 slat (nsec): min=1585, max=16754k, avg=54555.70, stdev=463466.03 00:17:59.843 clat (usec): min=651, max=57717, avg=8033.58, stdev=6620.52 00:17:59.843 lat (usec): min=659, max=57732, avg=8088.14, stdev=6664.18 00:17:59.843 clat percentiles (usec): 00:17:59.843 | 1.00th=[ 2212], 5.00th=[ 3359], 10.00th=[ 3982], 20.00th=[ 4752], 00:17:59.843 | 30.00th=[ 5997], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:17:59.843 | 70.00th=[ 7767], 80.00th=[ 8291], 90.00th=[10421], 95.00th=[13698], 00:17:59.843 | 99.00th=[44827], 99.50th=[50594], 99.90th=[56361], 99.95th=[56361], 00:17:59.843 | 99.99th=[57934] 00:17:59.843 bw ( KiB/s): min=28664, max=33472, per=28.98%, avg=31068.00, stdev=3399.77, samples=2 00:17:59.843 iops : min= 7166, max= 8368, avg=7767.00, stdev=849.94, samples=2 00:17:59.843 lat (usec) : 750=0.02% 00:17:59.843 lat (msec) : 2=0.39%, 4=6.00%, 10=79.69%, 20=11.17%, 50=2.46% 00:17:59.843 lat (msec) : 100=0.27% 00:17:59.843 cpu : usr=4.87%, sys=6.66%, ctx=691, majf=0, minf=1 00:17:59.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:59.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.843 issued rwts: total=7680,7894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.843 job1: (groupid=0, jobs=1): err= 0: pid=1355857: Mon Jul 15 14:03:57 2024 00:17:59.843 read: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec) 00:17:59.843 slat (nsec): min=968, max=7323.0k, avg=57096.08, stdev=420450.66 00:17:59.843 clat (usec): min=2181, max=16559, avg=7895.07, stdev=2151.75 00:17:59.843 lat (usec): min=2186, max=16583, avg=7952.16, stdev=2169.61 00:17:59.843 clat percentiles (usec): 00:17:59.843 
| 1.00th=[ 3032], 5.00th=[ 5342], 10.00th=[ 5669], 20.00th=[ 6194], 00:17:59.843 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 8094], 00:17:59.843 | 70.00th=[ 8717], 80.00th=[ 9765], 90.00th=[11076], 95.00th=[11863], 00:17:59.843 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14484], 99.95th=[14615], 00:17:59.843 | 99.99th=[16581] 00:17:59.843 write: IOPS=8939, BW=34.9MiB/s (36.6MB/s)(35.1MiB/1006msec); 0 zone resets 00:17:59.843 slat (nsec): min=1558, max=10373k, avg=51446.99, stdev=372916.29 00:17:59.843 clat (usec): min=1736, max=19371, avg=6547.70, stdev=2002.25 00:17:59.843 lat (usec): min=1740, max=19378, avg=6599.15, stdev=2011.53 00:17:59.843 clat percentiles (usec): 00:17:59.843 | 1.00th=[ 3163], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4686], 00:17:59.843 | 30.00th=[ 5473], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 6849], 00:17:59.843 | 70.00th=[ 7177], 80.00th=[ 7701], 90.00th=[ 8848], 95.00th=[10028], 00:17:59.843 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15008], 99.95th=[15008], 00:17:59.843 | 99.99th=[19268] 00:17:59.843 bw ( KiB/s): min=34928, max=35992, per=33.07%, avg=35460.00, stdev=752.36, samples=2 00:17:59.843 iops : min= 8732, max= 8998, avg=8865.00, stdev=188.09, samples=2 00:17:59.843 lat (msec) : 2=0.05%, 4=4.95%, 10=83.65%, 20=11.35% 00:17:59.843 cpu : usr=6.07%, sys=8.26%, ctx=584, majf=0, minf=1 00:17:59.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:59.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.843 issued rwts: total=8704,8993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.843 job2: (groupid=0, jobs=1): err= 0: pid=1355858: Mon Jul 15 14:03:57 2024 00:17:59.843 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:17:59.843 slat (nsec): min=923, max=17435k, avg=125821.20, stdev=893890.67 00:17:59.843 clat (usec): min=5069, max=68448, avg=15921.29, stdev=8379.54 00:17:59.843 lat (usec): min=5073, max=77182, avg=16047.11, stdev=8459.82 00:17:59.843 clat percentiles (usec): 00:17:59.843 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[11338], 20.00th=[12125], 00:17:59.843 | 30.00th=[12518], 40.00th=[13698], 50.00th=[14222], 60.00th=[14615], 00:17:59.843 | 70.00th=[15008], 80.00th=[16188], 90.00th=[20317], 95.00th=[33817], 00:17:59.843 | 99.00th=[58983], 99.50th=[64226], 99.90th=[68682], 99.95th=[68682], 00:17:59.843 | 99.99th=[68682] 00:17:59.843 write: IOPS=3938, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1006msec); 0 zone resets 00:17:59.843 slat (nsec): min=1617, max=9733.3k, avg=126432.93, stdev=771151.23 00:17:59.843 clat (usec): min=1227, max=83503, avg=17824.11, stdev=16401.13 00:17:59.843 lat (usec): min=1236, max=85557, avg=17950.54, stdev=16512.61 00:17:59.843 clat percentiles (usec): 00:17:59.843 | 1.00th=[ 3458], 5.00th=[ 4817], 10.00th=[ 6915], 20.00th=[ 9110], 00:17:59.843 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12125], 60.00th=[13566], 00:17:59.843 | 70.00th=[16319], 80.00th=[18744], 90.00th=[30802], 95.00th=[65799], 00:17:59.843 | 99.00th=[77071], 99.50th=[78119], 99.90th=[83362], 99.95th=[83362], 00:17:59.843 | 99.99th=[83362] 00:17:59.843 bw ( KiB/s): min=12120, max=18552, per=14.30%, avg=15336.00, stdev=4548.11, samples=2 00:17:59.843 iops : min= 3030, max= 4638, avg=3834.00, stdev=1137.03, samples=2 00:17:59.843 lat (msec) : 2=0.03%, 4=0.94%, 10=15.32%, 20=69.11%, 50=9.05% 00:17:59.843 lat (msec) 
: 100=5.55% 00:17:59.843 cpu : usr=2.69%, sys=4.08%, ctx=335, majf=0, minf=1 00:17:59.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:59.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.843 issued rwts: total=3584,3962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.843 job3: (groupid=0, jobs=1): err= 0: pid=1355859: Mon Jul 15 14:03:57 2024 00:17:59.843 read: IOPS=6039, BW=23.6MiB/s (24.7MB/s)(23.7MiB/1006msec) 00:17:59.843 slat (nsec): min=938, max=16322k, avg=79066.01, stdev=608716.49 00:17:59.843 clat (usec): min=4043, max=30151, avg=10335.46, stdev=3869.77 00:17:59.843 lat (usec): min=4571, max=30156, avg=10414.53, stdev=3910.88 00:17:59.843 clat percentiles (usec): 00:17:59.843 | 1.00th=[ 5538], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8094], 00:17:59.843 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:17:59.843 | 70.00th=[10159], 80.00th=[11600], 90.00th=[13173], 95.00th=[18482], 00:17:59.843 | 99.00th=[29230], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:17:59.843 | 99.99th=[30278] 00:17:59.843 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:17:59.843 slat (nsec): min=1638, max=18841k, avg=75312.34, stdev=571785.18 00:17:59.843 clat (usec): min=742, max=36234, avg=10514.71, stdev=5930.57 00:17:59.843 lat (usec): min=769, max=36287, avg=10590.02, stdev=5963.65 00:17:59.843 clat percentiles (usec): 00:17:59.843 | 1.00th=[ 3064], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 5997], 00:17:59.843 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 8979], 00:17:59.843 | 70.00th=[11994], 80.00th=[16057], 90.00th=[18482], 95.00th=[20317], 00:17:59.844 | 99.00th=[32900], 99.50th=[32900], 99.90th=[34341], 99.95th=[34341], 00:17:59.844 | 99.99th=[36439] 00:17:59.844 bw ( KiB/s): min=24576, max=24576, per=22.92%, avg=24576.00, stdev= 0.00, samples=2 00:17:59.844 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:17:59.844 lat (usec) : 750=0.01%, 1000=0.02% 00:17:59.844 lat (msec) : 2=0.14%, 4=0.99%, 10=64.83%, 20=28.94%, 50=5.08% 00:17:59.844 cpu : usr=4.68%, sys=5.77%, ctx=403, majf=0, minf=1 00:17:59.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:59.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.844 issued rwts: total=6076,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.844 00:17:59.844 Run status group 0 (all jobs): 00:17:59.844 READ: bw=101MiB/s (106MB/s), 13.9MiB/s-33.8MiB/s (14.6MB/s-35.4MB/s), io=102MiB (107MB), run=1006-1007msec 00:17:59.844 WRITE: bw=105MiB/s (110MB/s), 15.4MiB/s-34.9MiB/s (16.1MB/s-36.6MB/s), io=105MiB (111MB), run=1006-1007msec 00:17:59.844 00:17:59.844 Disk stats (read/write): 00:17:59.844 nvme0n1: ios=6183/6519, merge=0/0, ticks=48129/45031, in_queue=93160, util=87.37% 00:17:59.844 nvme0n2: ios=7223/7411, merge=0/0, ticks=54368/46744, in_queue=101112, util=91.24% 00:17:59.844 nvme0n3: ios=3643/3631, merge=0/0, ticks=36824/37779, in_queue=74603, util=93.05% 00:17:59.844 nvme0n4: ios=4689/5120, merge=0/0, ticks=44087/50028, in_queue=94115, util=94.25% 00:17:59.844 14:03:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # 
sync 00:17:59.844 14:03:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1356198 00:17:59.844 14:03:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:59.844 14:03:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:59.844 [global] 00:17:59.844 thread=1 00:17:59.844 invalidate=1 00:17:59.844 rw=read 00:17:59.844 time_based=1 00:17:59.844 runtime=10 00:17:59.844 ioengine=libaio 00:17:59.844 direct=1 00:17:59.844 bs=4096 00:17:59.844 iodepth=1 00:17:59.844 norandommap=1 00:17:59.844 numjobs=1 00:17:59.844 00:17:59.844 [job0] 00:17:59.844 filename=/dev/nvme0n1 00:17:59.844 [job1] 00:17:59.844 filename=/dev/nvme0n2 00:17:59.844 [job2] 00:17:59.844 filename=/dev/nvme0n3 00:17:59.844 [job3] 00:17:59.844 filename=/dev/nvme0n4 00:17:59.844 Could not set queue depth (nvme0n1) 00:17:59.844 Could not set queue depth (nvme0n2) 00:17:59.844 Could not set queue depth (nvme0n3) 00:17:59.844 Could not set queue depth (nvme0n4) 00:18:00.103 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:00.103 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:00.103 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:00.103 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:00.103 fio-3.35 00:18:00.103 Starting 4 threads 00:18:03.408 14:04:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:03.408 14:04:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:03.408 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2482176, buflen=4096 00:18:03.408 fio: pid=1356388, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:03.408 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:03.408 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:03.408 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=942080, buflen=4096 00:18:03.408 fio: pid=1356387, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:03.408 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11579392, buflen=4096 00:18:03.408 fio: pid=1356385, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:03.408 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:03.408 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:03.408 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7077888, buflen=4096 00:18:03.408 fio: pid=1356386, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:03.408 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:03.408 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:03.408 00:18:03.408 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1356385: Mon Jul 15 14:04:01 2024 00:18:03.408 read: IOPS=981, BW=3924KiB/s (4018kB/s)(11.0MiB/2882msec) 00:18:03.408 slat (usec): min=6, max=11695, avg=33.81, stdev=301.20 00:18:03.408 clat (usec): min=321, max=3720, avg=978.99, stdev=111.16 00:18:03.408 lat (usec): min=348, max=12780, avg=1012.81, stdev=320.41 00:18:03.408 clat percentiles (usec): 00:18:03.408 | 1.00th=[ 603], 5.00th=[ 807], 10.00th=[ 881], 20.00th=[ 930], 00:18:03.408 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:18:03.408 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1090], 00:18:03.408 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 1696], 99.95th=[ 2278], 00:18:03.408 | 99.99th=[ 3720] 00:18:03.408 bw ( KiB/s): min= 3840, max= 3976, per=55.74%, avg=3913.60, stdev=57.52, samples=5 00:18:03.408 iops : min= 960, max= 994, avg=978.40, stdev=14.38, samples=5 00:18:03.408 lat (usec) : 500=0.50%, 750=2.33%, 1000=55.02% 00:18:03.408 lat (msec) : 2=42.04%, 4=0.07% 00:18:03.408 cpu : usr=1.77%, sys=3.82%, ctx=2831, majf=0, minf=1 00:18:03.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.408 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1356386: Mon Jul 15 14:04:01 2024 00:18:03.408 read: IOPS=562, BW=2250KiB/s (2304kB/s)(6912KiB/3072msec) 00:18:03.408 slat (usec): min=6, max=111, avg=24.03, stdev= 4.05 00:18:03.408 clat (usec): min=383, max=42155, avg=1746.88, stdev=5417.03 00:18:03.408 lat (usec): min=407, max=42178, avg=1770.91, stdev=5417.28 00:18:03.408 clat percentiles (usec): 00:18:03.408 | 1.00th=[ 725], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 963], 00:18:03.408 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:18:03.408 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:18:03.408 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:03.408 | 99.99th=[42206] 00:18:03.408 bw ( KiB/s): min= 712, max= 3904, per=39.03%, avg=2740.80, stdev=1511.33, samples=5 00:18:03.408 iops : min= 178, max= 976, avg=685.20, stdev=377.83, samples=5 00:18:03.408 lat (usec) : 500=0.17%, 750=1.16%, 1000=32.16% 00:18:03.408 lat (msec) : 2=64.66%, 50=1.79% 00:18:03.408 cpu : usr=0.39%, sys=1.82%, ctx=1730, majf=0, minf=1 00:18:03.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 issued rwts: total=1729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.408 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1356387: Mon Jul 15 14:04:01 2024 00:18:03.408 read: IOPS=83, BW=334KiB/s (342kB/s)(920KiB/2754msec) 00:18:03.408 slat (usec): min=7, max=9650, avg=66.31, stdev=633.33 00:18:03.408 clat (usec): min=688, 
max=45070, avg=11896.13, stdev=18107.43 00:18:03.408 lat (usec): min=696, max=50960, avg=11962.60, stdev=18188.14 00:18:03.408 clat percentiles (usec): 00:18:03.408 | 1.00th=[ 742], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 971], 00:18:03.408 | 30.00th=[ 1012], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1123], 00:18:03.408 | 70.00th=[ 1205], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:03.408 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:18:03.408 | 99.99th=[44827] 00:18:03.408 bw ( KiB/s): min= 96, max= 1408, per=5.10%, avg=358.40, stdev=586.74, samples=5 00:18:03.408 iops : min= 24, max= 352, avg=89.60, stdev=146.69, samples=5 00:18:03.408 lat (usec) : 750=1.30%, 1000=25.54% 00:18:03.408 lat (msec) : 2=45.89%, 4=0.43%, 50=26.41% 00:18:03.408 cpu : usr=0.11%, sys=0.25%, ctx=233, majf=0, minf=1 00:18:03.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 issued rwts: total=231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.408 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1356388: Mon Jul 15 14:04:01 2024 00:18:03.408 read: IOPS=234, BW=938KiB/s (961kB/s)(2424KiB/2583msec) 00:18:03.408 slat (nsec): min=7774, max=60293, avg=25840.99, stdev=4146.00 00:18:03.408 clat (usec): min=710, max=42097, avg=4228.77, stdev=10726.55 00:18:03.408 lat (usec): min=719, max=42122, avg=4254.62, stdev=10726.46 00:18:03.408 clat percentiles (usec): 00:18:03.408 | 1.00th=[ 947], 5.00th=[ 1029], 10.00th=[ 1074], 20.00th=[ 1106], 00:18:03.408 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:18:03.408 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[41157], 00:18:03.408 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:03.408 | 99.99th=[42206] 00:18:03.408 bw ( KiB/s): min= 192, max= 1496, per=13.76%, avg=966.40, stdev=545.04, samples=5 00:18:03.408 iops : min= 48, max= 374, avg=241.60, stdev=136.26, samples=5 00:18:03.408 lat (usec) : 750=0.33%, 1000=2.14% 00:18:03.408 lat (msec) : 2=89.79%, 50=7.58% 00:18:03.408 cpu : usr=0.31%, sys=0.97%, ctx=607, majf=0, minf=2 00:18:03.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.408 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.408 00:18:03.408 Run status group 0 (all jobs): 00:18:03.408 READ: bw=7020KiB/s (7188kB/s), 334KiB/s-3924KiB/s (342kB/s-4018kB/s), io=21.1MiB (22.1MB), run=2583-3072msec 00:18:03.408 00:18:03.408 Disk stats (read/write): 00:18:03.408 nvme0n1: ios=2783/0, merge=0/0, ticks=2612/0, in_queue=2612, util=94.06% 00:18:03.408 nvme0n2: ios=1722/0, merge=0/0, ticks=2706/0, in_queue=2706, util=95.36% 00:18:03.408 nvme0n3: ios=269/0, merge=0/0, ticks=3209/0, in_queue=3209, util=98.85% 00:18:03.408 nvme0n4: ios=570/0, merge=0/0, ticks=2258/0, in_queue=2258, util=96.06% 00:18:03.669 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:03.669 14:04:01 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:03.669 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:03.669 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:03.930 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:03.930 14:04:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:04.191 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:04.191 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:04.191 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:04.191 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1356198 00:18:04.191 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:04.191 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:04.452 nvmf hotplug test: fio failed as expected 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@120 -- # set +e 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.452 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.713 rmmod nvme_tcp 00:18:04.713 rmmod nvme_fabrics 00:18:04.713 rmmod nvme_keyring 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1352689 ']' 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1352689 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1352689 ']' 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1352689 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1352689 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1352689' 00:18:04.713 killing process with pid 1352689 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1352689 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1352689 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.713 14:04:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.277 14:04:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.277 00:18:07.277 real 0m29.325s 00:18:07.277 user 2m36.634s 00:18:07.277 sys 0m9.423s 00:18:07.277 14:04:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:07.277 14:04:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.277 ************************************ 00:18:07.277 END TEST nvmf_fio_target 00:18:07.277 ************************************ 00:18:07.277 14:04:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:07.277 14:04:04 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:07.277 14:04:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
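The hotplug phase that ends above is the one point in this suite where fio is meant to fail: a background read job runs against the four exported namespaces while the backing bdevs are deleted over JSON-RPC, so the err=121 (Remote I/O error) results and the non-zero fio_status are the pass condition. A minimal standalone sketch of that sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 with the bdev names used in this run; every command below is taken from the trace above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start a 10-second read job against the connected namespaces, in the background
  $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3    # let fio open the devices before pulling them out from under it
  # hot-remove the backing devices while I/O is still in flight
  $SPDK/scripts/rpc.py bdev_raid_delete concat0
  $SPDK/scripts/rpc.py bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $SPDK/scripts/rpc.py bdev_malloc_delete "$m"
  done
  # fio is expected to exit non-zero once its files return Remote I/O error
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
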
00:18:07.277 14:04:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.278 14:04:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:07.278 ************************************ 00:18:07.278 START TEST nvmf_bdevio 00:18:07.278 ************************************ 00:18:07.278 14:04:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:07.278 * Looking for test storage... 00:18:07.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:07.278 14:04:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:15.456 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:15.456 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:15.456 Found net devices under 0000:31:00.0: cvl_0_0 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:15.456 
Found net devices under 0000:31:00.1: cvl_0_1 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.456 14:04:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:15.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:18:15.456 00:18:15.456 --- 10.0.0.2 ping statistics --- 00:18:15.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.456 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:18:15.456 00:18:15.456 --- 10.0.0.1 ping statistics --- 00:18:15.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.456 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1362089 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1362089 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1362089 ']' 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.456 14:04:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:15.456 [2024-07-15 14:04:13.257959] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:15.456 [2024-07-15 14:04:13.258027] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.456 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.456 [2024-07-15 14:04:13.356691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.456 [2024-07-15 14:04:13.447844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.456 [2024-07-15 14:04:13.447926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:15.456 [2024-07-15 14:04:13.447935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.456 [2024-07-15 14:04:13.447942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.456 [2024-07-15 14:04:13.447948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.456 [2024-07-15 14:04:13.448111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:15.456 [2024-07-15 14:04:13.448374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:15.456 [2024-07-15 14:04:13.448532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:15.456 [2024-07-15 14:04:13.448535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.028 [2024-07-15 14:04:14.110148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.028 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.289 Malloc0 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.289 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
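The five rpc_cmd calls above are the entire per-test target bring-up for bdevio; in this harness, rpc_cmd forwards its arguments to scripts/rpc.py against the nvmf_tgt started by nvmfappstart. Collected into standalone form, as a sketch assuming the default /var/tmp/spdk.sock RPC socket:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # TCP transport; -o and -u 8192 are the suite's NVMF_TRANSPORT_OPTS (-u sets the I/O unit size)
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # -a: allow any host to connect, -s: serial number
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice just below is the target acknowledging the final call; bdevio then connects using the bdev_nvme_attach_controller JSON that gen_nvmf_target_json prints further down.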
00:18:16.290 [2024-07-15 14:04:14.175243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:16.290 { 00:18:16.290 "params": { 00:18:16.290 "name": "Nvme$subsystem", 00:18:16.290 "trtype": "$TEST_TRANSPORT", 00:18:16.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:16.290 "adrfam": "ipv4", 00:18:16.290 "trsvcid": "$NVMF_PORT", 00:18:16.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:16.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:16.290 "hdgst": ${hdgst:-false}, 00:18:16.290 "ddgst": ${ddgst:-false} 00:18:16.290 }, 00:18:16.290 "method": "bdev_nvme_attach_controller" 00:18:16.290 } 00:18:16.290 EOF 00:18:16.290 )") 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:16.290 14:04:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:16.290 "params": { 00:18:16.290 "name": "Nvme1", 00:18:16.290 "trtype": "tcp", 00:18:16.290 "traddr": "10.0.0.2", 00:18:16.290 "adrfam": "ipv4", 00:18:16.290 "trsvcid": "4420", 00:18:16.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.290 "hdgst": false, 00:18:16.290 "ddgst": false 00:18:16.290 }, 00:18:16.290 "method": "bdev_nvme_attach_controller" 00:18:16.290 }' 00:18:16.290 [2024-07-15 14:04:14.231871] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:18:16.290 [2024-07-15 14:04:14.231938] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362128 ] 00:18:16.290 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.290 [2024-07-15 14:04:14.307383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:16.290 [2024-07-15 14:04:14.384855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.290 [2024-07-15 14:04:14.384990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.290 [2024-07-15 14:04:14.384993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.551 I/O targets: 00:18:16.551 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:16.551 00:18:16.551 00:18:16.551 CUnit - A unit testing framework for C - Version 2.1-3 00:18:16.551 http://cunit.sourceforge.net/ 00:18:16.551 00:18:16.551 00:18:16.551 Suite: bdevio tests on: Nvme1n1 00:18:16.811 Test: blockdev write read block ...passed 00:18:16.811 Test: blockdev write zeroes read block ...passed 00:18:16.811 Test: blockdev write zeroes read no split ...passed 00:18:16.811 Test: blockdev write zeroes read split ...passed 00:18:16.811 Test: blockdev write zeroes read split partial ...passed 00:18:16.811 Test: blockdev reset ...[2024-07-15 14:04:14.815143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.811 [2024-07-15 14:04:14.815203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfac370 (9): Bad file descriptor 00:18:16.811 [2024-07-15 14:04:14.835445] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:16.811 passed 00:18:16.811 Test: blockdev write read 8 blocks ...passed 00:18:16.811 Test: blockdev write read size > 128k ...passed 00:18:16.812 Test: blockdev write read invalid size ...passed 00:18:16.812 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:16.812 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:16.812 Test: blockdev write read max offset ...passed 00:18:17.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:17.073 Test: blockdev writev readv 8 blocks ...passed 00:18:17.073 Test: blockdev writev readv 30 x 1block ...passed 00:18:17.073 Test: blockdev writev readv block ...passed 00:18:17.073 Test: blockdev writev readv size > 128k ...passed 00:18:17.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:17.073 Test: blockdev comparev and writev ...[2024-07-15 14:04:15.018931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.018955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.018967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.018973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.019493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.019500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.019510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.019515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.020018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.020027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.020036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.020041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.020542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.020553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.020562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.073 [2024-07-15 14:04:15.020567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.073 passed 00:18:17.073 Test: blockdev nvme passthru rw ...passed 00:18:17.073 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:04:15.105691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.073 [2024-07-15 14:04:15.105701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.106065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.073 [2024-07-15 14:04:15.106074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.106412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.073 [2024-07-15 14:04:15.106419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.073 [2024-07-15 14:04:15.106730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.073 [2024-07-15 14:04:15.106737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.073 passed 00:18:17.073 Test: blockdev nvme admin passthru ...passed 00:18:17.073 Test: blockdev copy ...passed 00:18:17.073 00:18:17.073 Run Summary: Type Total Ran Passed Failed Inactive 00:18:17.073 suites 1 1 n/a 0 0 00:18:17.073 tests 23 23 23 0 0 00:18:17.073 asserts 152 152 152 0 n/a 00:18:17.073 00:18:17.073 Elapsed time = 1.053 seconds 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.334 rmmod nvme_tcp 00:18:17.334 rmmod nvme_fabrics 00:18:17.334 rmmod nvme_keyring 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1362089 ']' 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1362089 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1362089 ']' 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1362089 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1362089 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1362089' 00:18:17.334 killing process with pid 1362089 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1362089 00:18:17.334 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1362089 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.595 14:04:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.144 14:04:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.144 00:18:20.144 real 0m12.711s 00:18:20.144 user 0m13.003s 00:18:20.144 sys 0m6.534s 00:18:20.144 14:04:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.144 14:04:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:20.144 ************************************ 00:18:20.144 END TEST nvmf_bdevio 00:18:20.144 ************************************ 00:18:20.144 14:04:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:20.144 14:04:17 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:20.144 14:04:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:20.144 14:04:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.144 14:04:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:20.144 ************************************ 00:18:20.144 START TEST nvmf_auth_target 00:18:20.144 ************************************ 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:20.144 * Looking for test storage... 
00:18:20.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.144 14:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.280 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.280 14:04:25 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:28.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:28.281 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:18:28.281 Found net devices under 0000:31:00.0: cvl_0_0 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:28.281 Found net devices under 0000:31:00.1: cvl_0_1 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.281 14:04:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:18:28.281 00:18:28.281 --- 10.0.0.2 ping statistics --- 00:18:28.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.281 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:18:28.281 00:18:28.281 --- 10.0.0.1 ping statistics --- 00:18:28.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.281 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1367128 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1367128 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1367128 ']' 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
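
[Annotation] Condensed, the fixture above builds a two-port loopback topology: the first physical port (cvl_0_0) becomes the target at 10.0.0.2 inside a dedicated network namespace, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and one ping in each direction verifies connectivity. The commands, collected from the trace (interface names are the E810 ports enumerated earlier in the log):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator
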
00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.281 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.851 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.851 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:28.851 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.851 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.851 14:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1367300 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:28.852 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b75bff8c6878e456f6986184f8c72d1acce0b8f01488cdec 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.P9B 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b75bff8c6878e456f6986184f8c72d1acce0b8f01488cdec 0 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b75bff8c6878e456f6986184f8c72d1acce0b8f01488cdec 0 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b75bff8c6878e456f6986184f8c72d1acce0b8f01488cdec 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:29.112 14:04:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.P9B 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.P9B 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.P9B 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3725f06b818e36712cd40fc0f0ec5cc8881fbb688b20192f64f015aa3520779c 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kh5 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3725f06b818e36712cd40fc0f0ec5cc8881fbb688b20192f64f015aa3520779c 3 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3725f06b818e36712cd40fc0f0ec5cc8881fbb688b20192f64f015aa3520779c 3 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3725f06b818e36712cd40fc0f0ec5cc8881fbb688b20192f64f015aa3520779c 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kh5 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kh5 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.kh5 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=176988f61b8f48be05356593e3547de4 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mRB 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 176988f61b8f48be05356593e3547de4 1 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 176988f61b8f48be05356593e3547de4 1 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=176988f61b8f48be05356593e3547de4 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mRB 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mRB 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.mRB 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.112 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7d46c95d14cdee47a466b50423b8da86063dd7dca447e37f 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0KP 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7d46c95d14cdee47a466b50423b8da86063dd7dca447e37f 2 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7d46c95d14cdee47a466b50423b8da86063dd7dca447e37f 2 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7d46c95d14cdee47a466b50423b8da86063dd7dca447e37f 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0KP 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0KP 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.0KP 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f4979855c58782453505d60276d3cd4fb836fc178ce192f5 00:18:29.113 
14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5bC 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f4979855c58782453505d60276d3cd4fb836fc178ce192f5 2 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f4979855c58782453505d60276d3cd4fb836fc178ce192f5 2 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f4979855c58782453505d60276d3cd4fb836fc178ce192f5 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:29.113 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5bC 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5bC 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.5bC 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e81a2b5e9901d8bc4698eda59e99760e 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.XLu 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e81a2b5e9901d8bc4698eda59e99760e 1 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e81a2b5e9901d8bc4698eda59e99760e 1 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e81a2b5e9901d8bc4698eda59e99760e 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.XLu 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.XLu 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.XLu 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=400d49f74c2d5413f3324120d2fd032610a1f836b26fe4446cbb956e700be481 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lvi 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 400d49f74c2d5413f3324120d2fd032610a1f836b26fe4446cbb956e700be481 3 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 400d49f74c2d5413f3324120d2fd032610a1f836b26fe4446cbb956e700be481 3 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=400d49f74c2d5413f3324120d2fd032610a1f836b26fe4446cbb956e700be481 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lvi 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lvi 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.lvi 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1367128 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1367128 ']' 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
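
[Annotation] Each secret above comes from the same recipe: half as many bytes of /dev/urandom as requested hex characters, rendered by xxd, then wrapped into the DHHC-1 format the host later presents via --dhchap-secret (compare key0, b75bff8c..., with the DHHC-1:00:Yjc1YmZm... secret used at the nvme connect further down — the base64 payload demonstrably encodes the hex string itself). A sketch of the wrapping step; the four trailing CRC32 bytes and their byte order are an assumption based on the shape of the logged output, not something this log confirms:

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, as for keys[0]
    python3 - "$key" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    # Assumed layout: DHHC-1:<2-digit hash id>:<base64(key bytes + CRC32, little endian)>:
    # hash id 00 corresponds to the "null" digest used for keys[0] above.
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:00:{}:".format(base64.b64encode(key + crc).decode()))
    EOF
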
00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.373 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1367300 /var/tmp/host.sock 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1367300 ']' 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:29.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.P9B 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.P9B 00:18:29.634 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.P9B 00:18:29.894 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.kh5 ]] 00:18:29.894 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kh5 00:18:29.894 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.894 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.894 14:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.894 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kh5 00:18:29.894 14:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kh5 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.mRB 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.mRB 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.mRB 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.0KP ]] 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0KP 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0KP 00:18:30.154 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0KP 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5bC 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5bC 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5bC 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.XLu ]] 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLu 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLu 00:18:30.415 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.XLu 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lvi 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lvi 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lvi 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:30.686 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.946 14:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.206 00:18:31.206 14:04:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.206 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.206 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.467 { 00:18:31.467 "cntlid": 1, 00:18:31.467 "qid": 0, 00:18:31.467 "state": "enabled", 00:18:31.467 "thread": "nvmf_tgt_poll_group_000", 00:18:31.467 "listen_address": { 00:18:31.467 "trtype": "TCP", 00:18:31.467 "adrfam": "IPv4", 00:18:31.467 "traddr": "10.0.0.2", 00:18:31.467 "trsvcid": "4420" 00:18:31.467 }, 00:18:31.467 "peer_address": { 00:18:31.467 "trtype": "TCP", 00:18:31.467 "adrfam": "IPv4", 00:18:31.467 "traddr": "10.0.0.1", 00:18:31.467 "trsvcid": "35648" 00:18:31.467 }, 00:18:31.467 "auth": { 00:18:31.467 "state": "completed", 00:18:31.467 "digest": "sha256", 00:18:31.467 "dhgroup": "null" 00:18:31.467 } 00:18:31.467 } 00:18:31.467 ]' 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.467 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.727 14:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:18:32.298 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.298 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:32.298 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.298 14:04:30 
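[Note on the verification step traced above] Each attach is checked the same way before teardown: bdev_nvme_get_controllers on the host socket must report nvme0, and nvmf_subsystem_get_qpairs on the target must show the queue pair carrying the expected digest and DH group with auth.state set to "completed", meaning the DH-HMAC-CHAP handshake finished on that qpair. Condensed into one helper (the check_auth name and the $rpc variable are shorthand for this note, not names from the script):

check_auth() {
    # hypothetical condensation of the target/auth.sh@44-48 assertions above
    local digest=$1 dhgroup=$2 qpairs
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]    # e.g. sha256
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. null
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]     # handshake finished
}

Any other state, or a mismatched digest or dhgroup, trips the [[ ]] assertion and fails the run.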
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.298 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.298 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.298 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.298 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.559 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.818 00:18:32.818 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.818 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.818 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.078 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.078 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.078 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.078 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.078 14:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.078 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.078 { 00:18:33.078 "cntlid": 3, 00:18:33.078 "qid": 0, 00:18:33.078 
"state": "enabled", 00:18:33.078 "thread": "nvmf_tgt_poll_group_000", 00:18:33.078 "listen_address": { 00:18:33.078 "trtype": "TCP", 00:18:33.078 "adrfam": "IPv4", 00:18:33.078 "traddr": "10.0.0.2", 00:18:33.078 "trsvcid": "4420" 00:18:33.078 }, 00:18:33.078 "peer_address": { 00:18:33.078 "trtype": "TCP", 00:18:33.078 "adrfam": "IPv4", 00:18:33.078 "traddr": "10.0.0.1", 00:18:33.078 "trsvcid": "35684" 00:18:33.078 }, 00:18:33.078 "auth": { 00:18:33.078 "state": "completed", 00:18:33.078 "digest": "sha256", 00:18:33.078 "dhgroup": "null" 00:18:33.078 } 00:18:33.078 } 00:18:33.078 ]' 00:18:33.078 14:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.078 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.078 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.078 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:33.078 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.078 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.078 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.078 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.336 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.905 14:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.165 14:04:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.165 14:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.166 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.166 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.426 00:18:34.426 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.426 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.426 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.686 { 00:18:34.686 "cntlid": 5, 00:18:34.686 "qid": 0, 00:18:34.686 "state": "enabled", 00:18:34.686 "thread": "nvmf_tgt_poll_group_000", 00:18:34.686 "listen_address": { 00:18:34.686 "trtype": "TCP", 00:18:34.686 "adrfam": "IPv4", 00:18:34.686 "traddr": "10.0.0.2", 00:18:34.686 "trsvcid": "4420" 00:18:34.686 }, 00:18:34.686 "peer_address": { 00:18:34.686 "trtype": "TCP", 00:18:34.686 "adrfam": "IPv4", 00:18:34.686 "traddr": "10.0.0.1", 00:18:34.686 "trsvcid": "35706" 00:18:34.686 }, 00:18:34.686 "auth": { 00:18:34.686 "state": "completed", 00:18:34.686 "digest": "sha256", 00:18:34.686 "dhgroup": "null" 00:18:34.686 } 00:18:34.686 } 00:18:34.686 ]' 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.686 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.946 14:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:18:35.516 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.516 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:35.516 14:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.516 14:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.516 14:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.516 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.516 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.517 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.776 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
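[Note on the kernel-initiator pass traced above] After the SPDK-initiator pass, each cycle repeats the handshake with the Linux kernel initiator through nvme-cli, as in the connect/disconnect pairs above: the same key material is passed inline as DHHC-1 blobs, with --dhchap-secret carrying the host key and --dhchap-ctrl-secret the controller key for bidirectional authentication. The digit after DHHC-1: encodes how the secret was hashed (00 unhashed, 01/02/03 for SHA-256/384/512), matching the digests in the key filenames. Shape of the key2 call, with the secrets abbreviated (the full values appear verbatim in the log above):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
# host key and controller key as inline DHHC-1 blobs (abbreviated here)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-secret 'DHHC-1:02:ZjQ5...' --dhchap-ctrl-secret 'DHHC-1:01:ZTgx...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)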
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.035 00:18:36.035 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.035 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.035 14:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.035 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.035 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.035 14:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.035 14:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.035 14:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.035 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.035 { 00:18:36.035 "cntlid": 7, 00:18:36.035 "qid": 0, 00:18:36.035 "state": "enabled", 00:18:36.035 "thread": "nvmf_tgt_poll_group_000", 00:18:36.035 "listen_address": { 00:18:36.035 "trtype": "TCP", 00:18:36.035 "adrfam": "IPv4", 00:18:36.035 "traddr": "10.0.0.2", 00:18:36.035 "trsvcid": "4420" 00:18:36.035 }, 00:18:36.035 "peer_address": { 00:18:36.035 "trtype": "TCP", 00:18:36.035 "adrfam": "IPv4", 00:18:36.035 "traddr": "10.0.0.1", 00:18:36.035 "trsvcid": "42884" 00:18:36.035 }, 00:18:36.035 "auth": { 00:18:36.035 "state": "completed", 00:18:36.035 "digest": "sha256", 00:18:36.035 "dhgroup": "null" 00:18:36.035 } 00:18:36.035 } 00:18:36.035 ]' 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.293 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.552 14:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.122 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.382 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.641 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.641 { 00:18:37.641 "cntlid": 9, 00:18:37.641 "qid": 0, 00:18:37.641 "state": "enabled", 00:18:37.641 "thread": "nvmf_tgt_poll_group_000", 00:18:37.641 "listen_address": { 00:18:37.641 "trtype": "TCP", 00:18:37.641 "adrfam": "IPv4", 00:18:37.641 "traddr": "10.0.0.2", 00:18:37.641 "trsvcid": "4420" 00:18:37.641 }, 00:18:37.641 "peer_address": { 00:18:37.641 "trtype": "TCP", 00:18:37.641 "adrfam": "IPv4", 00:18:37.641 "traddr": "10.0.0.1", 00:18:37.641 "trsvcid": "42922" 00:18:37.641 }, 00:18:37.641 "auth": { 00:18:37.641 "state": "completed", 00:18:37.641 "digest": "sha256", 00:18:37.641 "dhgroup": "ffdhe2048" 00:18:37.641 } 00:18:37.641 } 00:18:37.641 ]' 00:18:37.641 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.901 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.901 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.901 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.901 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.901 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.901 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.901 14:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.160 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.730 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.991 14:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.252 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.252 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.252 { 00:18:39.252 "cntlid": 11, 00:18:39.252 "qid": 0, 00:18:39.252 "state": "enabled", 00:18:39.252 "thread": "nvmf_tgt_poll_group_000", 00:18:39.252 "listen_address": { 00:18:39.252 "trtype": "TCP", 00:18:39.252 "adrfam": "IPv4", 00:18:39.252 "traddr": "10.0.0.2", 00:18:39.252 "trsvcid": "4420" 00:18:39.252 }, 00:18:39.252 "peer_address": { 00:18:39.252 "trtype": "TCP", 00:18:39.252 "adrfam": "IPv4", 00:18:39.252 "traddr": "10.0.0.1", 00:18:39.252 "trsvcid": "42954" 00:18:39.252 }, 00:18:39.252 "auth": { 00:18:39.252 "state": "completed", 00:18:39.252 "digest": "sha256", 00:18:39.252 "dhgroup": "ffdhe2048" 00:18:39.252 } 00:18:39.252 } 00:18:39.252 ]' 00:18:39.252 
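[Note on the DH-group switch traced above] The cycles from here on have moved off the null DH group. Before every connect_authenticate call, both sides are pinned to exactly one digest/DH-group combination through bdev_nvme_set_options, so a completed handshake proves that specific pair: null means plain challenge-response with no Diffie-Hellman exchange, while ffdhe2048 and, further down in this log, ffdhe3072 add the DH step. The host-side call as repeated above; the target-side rpc_cmd call is identical apart from the socket:

# pin the host daemon to a single digest/dhgroup pair for this round of cycles
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

The outer loops at target/auth.sh@91-93 walk digests, then dhgroups, then key ids, so every combination is exercised with every key.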
14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.513 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.513 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.513 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.513 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.513 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.513 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.513 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.778 14:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.420 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.679 00:18:40.679 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.679 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.679 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.941 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.941 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.941 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.941 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.941 14:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.941 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.941 { 00:18:40.941 "cntlid": 13, 00:18:40.941 "qid": 0, 00:18:40.941 "state": "enabled", 00:18:40.941 "thread": "nvmf_tgt_poll_group_000", 00:18:40.941 "listen_address": { 00:18:40.941 "trtype": "TCP", 00:18:40.941 "adrfam": "IPv4", 00:18:40.941 "traddr": "10.0.0.2", 00:18:40.941 "trsvcid": "4420" 00:18:40.941 }, 00:18:40.941 "peer_address": { 00:18:40.941 "trtype": "TCP", 00:18:40.941 "adrfam": "IPv4", 00:18:40.941 "traddr": "10.0.0.1", 00:18:40.941 "trsvcid": "42988" 00:18:40.941 }, 00:18:40.941 "auth": { 00:18:40.941 "state": "completed", 00:18:40.942 "digest": "sha256", 00:18:40.942 "dhgroup": "ffdhe2048" 00:18:40.942 } 00:18:40.942 } 00:18:40.942 ]' 00:18:40.942 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.942 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.942 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.942 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.942 14:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.942 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.942 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.942 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.202 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.142 14:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.142 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.402 00:18:42.402 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.402 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.402 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.402 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.402 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.402 14:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.402 14:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.660 { 00:18:42.660 "cntlid": 15, 00:18:42.660 "qid": 0, 00:18:42.660 "state": "enabled", 00:18:42.660 "thread": "nvmf_tgt_poll_group_000", 00:18:42.660 "listen_address": { 00:18:42.660 "trtype": "TCP", 00:18:42.660 "adrfam": "IPv4", 00:18:42.660 "traddr": "10.0.0.2", 00:18:42.660 "trsvcid": "4420" 00:18:42.660 }, 00:18:42.660 "peer_address": { 00:18:42.660 "trtype": "TCP", 00:18:42.660 "adrfam": "IPv4", 00:18:42.660 "traddr": "10.0.0.1", 00:18:42.660 "trsvcid": "43010" 00:18:42.660 }, 00:18:42.660 "auth": { 00:18:42.660 "state": "completed", 00:18:42.660 "digest": "sha256", 00:18:42.660 "dhgroup": "ffdhe2048" 00:18:42.660 } 00:18:42.660 } 00:18:42.660 ]' 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.660 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.920 14:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:43.490 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.751 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.012 00:18:44.012 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.012 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.012 14:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.012 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.012 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.012 14:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.012 14:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.273 { 00:18:44.273 "cntlid": 17, 00:18:44.273 "qid": 0, 00:18:44.273 "state": "enabled", 00:18:44.273 "thread": "nvmf_tgt_poll_group_000", 00:18:44.273 "listen_address": { 00:18:44.273 "trtype": "TCP", 00:18:44.273 "adrfam": "IPv4", 00:18:44.273 "traddr": 
"10.0.0.2", 00:18:44.273 "trsvcid": "4420" 00:18:44.273 }, 00:18:44.273 "peer_address": { 00:18:44.273 "trtype": "TCP", 00:18:44.273 "adrfam": "IPv4", 00:18:44.273 "traddr": "10.0.0.1", 00:18:44.273 "trsvcid": "43038" 00:18:44.273 }, 00:18:44.273 "auth": { 00:18:44.273 "state": "completed", 00:18:44.273 "digest": "sha256", 00:18:44.273 "dhgroup": "ffdhe3072" 00:18:44.273 } 00:18:44.273 } 00:18:44.273 ]' 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.273 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.533 14:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:45.105 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.365 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.626 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.626 14:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.885 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.885 { 00:18:45.885 "cntlid": 19, 00:18:45.885 "qid": 0, 00:18:45.885 "state": "enabled", 00:18:45.885 "thread": "nvmf_tgt_poll_group_000", 00:18:45.885 "listen_address": { 00:18:45.885 "trtype": "TCP", 00:18:45.885 "adrfam": "IPv4", 00:18:45.885 "traddr": "10.0.0.2", 00:18:45.885 "trsvcid": "4420" 00:18:45.885 }, 00:18:45.885 "peer_address": { 00:18:45.885 "trtype": "TCP", 00:18:45.885 "adrfam": "IPv4", 00:18:45.885 "traddr": "10.0.0.1", 00:18:45.885 "trsvcid": "36430" 00:18:45.885 }, 00:18:45.885 "auth": { 00:18:45.885 "state": "completed", 00:18:45.885 "digest": "sha256", 00:18:45.885 "dhgroup": "ffdhe3072" 00:18:45.885 } 00:18:45.885 } 00:18:45.885 ]' 00:18:45.885 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.885 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.886 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.886 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.886 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.886 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.886 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.886 14:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.146 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.717 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.977 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.978 14:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.238 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.238 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.238 { 00:18:47.238 "cntlid": 21, 00:18:47.238 "qid": 0, 00:18:47.238 "state": "enabled", 00:18:47.239 "thread": "nvmf_tgt_poll_group_000", 00:18:47.239 "listen_address": { 00:18:47.239 "trtype": "TCP", 00:18:47.239 "adrfam": "IPv4", 00:18:47.239 "traddr": "10.0.0.2", 00:18:47.239 "trsvcid": "4420" 00:18:47.239 }, 00:18:47.239 "peer_address": { 00:18:47.239 "trtype": "TCP", 00:18:47.239 "adrfam": "IPv4", 00:18:47.239 "traddr": "10.0.0.1", 00:18:47.239 "trsvcid": "36456" 00:18:47.239 }, 00:18:47.239 "auth": { 00:18:47.239 "state": "completed", 00:18:47.239 "digest": "sha256", 00:18:47.239 "dhgroup": "ffdhe3072" 00:18:47.239 } 00:18:47.239 } 00:18:47.239 ]' 00:18:47.239 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.239 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.500 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.500 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.500 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.500 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.500 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.500 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.500 14:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
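(The trace above is one full pass of the connect_authenticate helper in target/auth.sh; the matching nvmf_subsystem_remove_host call follows just below.) Reconstructed from the @34-@56 line markers in this log, the helper looks roughly like the sketch here. It is a minimal reading of the xtrace, not the verbatim SPDK script: the subnqn/hostnqn/hostid variables stand in for the literal NQNs and host UUID printed in the log, and the keys/ckeys array names are inferred from the ${!keys[@]} and ${ckeys[$3]} expansions visible above.

    # One authentication round-trip, as reconstructed from target/auth.sh@34-@56.
    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest="$1"
        dhgroup="$2"
        key="key$3"
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # bidirectional auth only if a ctrlr key exists

        # Target side: allow the host NQN to authenticate with this key pair.
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"

        # Host side: attach a controller, which forces a DH-HMAC-CHAP exchange.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" "${ckey[@]}"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

        # Target side: the qpair must report the negotiated digest/dhgroup and a
        # completed authentication state.
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
        hostrpc bdev_nvme_detach_controller nvme0

        # Repeat the login through the kernel initiator, then tear down.
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
            --dhchap-secret "${keys[$3]}" ${ckeys[$3]:+--dhchap-ctrl-secret "${ckeys[$3]}"}
        nvme disconnect -n "$subnqn"
        rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }

The --dhchap-secret strings in the log follow the TP 8006 "DHHC-1:NN:<base64>:" secret representation, where the NN field records whether the secret is the plain key (00) or one transformed with SHA-256/384/512 (01/02/03).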
00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.442 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.443 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:48.443 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.443 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.443 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.443 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.443 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.704 00:18:48.704 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.704 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.704 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.964 { 00:18:48.964 "cntlid": 23, 00:18:48.964 "qid": 0, 00:18:48.964 "state": "enabled", 00:18:48.964 "thread": "nvmf_tgt_poll_group_000", 00:18:48.964 "listen_address": { 00:18:48.964 "trtype": "TCP", 00:18:48.964 "adrfam": "IPv4", 00:18:48.964 "traddr": "10.0.0.2", 00:18:48.964 "trsvcid": "4420" 00:18:48.964 }, 00:18:48.964 "peer_address": { 00:18:48.964 "trtype": "TCP", 00:18:48.964 "adrfam": "IPv4", 00:18:48.964 "traddr": "10.0.0.1", 00:18:48.964 "trsvcid": "36484" 00:18:48.964 }, 00:18:48.964 "auth": { 00:18:48.964 "state": "completed", 00:18:48.964 "digest": "sha256", 00:18:48.964 "dhgroup": "ffdhe3072" 00:18:48.964 } 00:18:48.964 } 00:18:48.964 ]' 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.964 14:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.225 14:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:18:49.795 14:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.796 14:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.056 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.317 00:18:50.317 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.317 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.317 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.578 { 00:18:50.578 "cntlid": 25, 00:18:50.578 "qid": 0, 00:18:50.578 "state": "enabled", 00:18:50.578 "thread": "nvmf_tgt_poll_group_000", 00:18:50.578 "listen_address": { 00:18:50.578 "trtype": "TCP", 00:18:50.578 "adrfam": "IPv4", 00:18:50.578 "traddr": "10.0.0.2", 00:18:50.578 "trsvcid": "4420" 00:18:50.578 }, 00:18:50.578 "peer_address": { 00:18:50.578 "trtype": "TCP", 00:18:50.578 "adrfam": "IPv4", 00:18:50.578 "traddr": "10.0.0.1", 00:18:50.578 "trsvcid": "36508" 00:18:50.578 }, 00:18:50.578 "auth": { 00:18:50.578 "state": "completed", 00:18:50.578 "digest": "sha256", 00:18:50.578 "dhgroup": "ffdhe4096" 00:18:50.578 } 00:18:50.578 } 00:18:50.578 ]' 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.578 14:04:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.578 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.579 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.579 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.579 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.579 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.840 14:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:51.411 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.672 14:04:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.672 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.933 00:18:51.933 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.933 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.933 14:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.193 { 00:18:52.193 "cntlid": 27, 00:18:52.193 "qid": 0, 00:18:52.193 "state": "enabled", 00:18:52.193 "thread": "nvmf_tgt_poll_group_000", 00:18:52.193 "listen_address": { 00:18:52.193 "trtype": "TCP", 00:18:52.193 "adrfam": "IPv4", 00:18:52.193 "traddr": "10.0.0.2", 00:18:52.193 "trsvcid": "4420" 00:18:52.193 }, 00:18:52.193 "peer_address": { 00:18:52.193 "trtype": "TCP", 00:18:52.193 "adrfam": "IPv4", 00:18:52.193 "traddr": "10.0.0.1", 00:18:52.193 "trsvcid": "36530" 00:18:52.193 }, 00:18:52.193 "auth": { 00:18:52.193 "state": "completed", 00:18:52.193 "digest": "sha256", 00:18:52.193 "dhgroup": "ffdhe4096" 00:18:52.193 } 00:18:52.193 } 00:18:52.193 ]' 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.193 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.454 14:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:18:53.026 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.026 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:53.026 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.287 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.569 00:18:53.569 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.569 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.569 14:04:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.829 { 00:18:53.829 "cntlid": 29, 00:18:53.829 "qid": 0, 00:18:53.829 "state": "enabled", 00:18:53.829 "thread": "nvmf_tgt_poll_group_000", 00:18:53.829 "listen_address": { 00:18:53.829 "trtype": "TCP", 00:18:53.829 "adrfam": "IPv4", 00:18:53.829 "traddr": "10.0.0.2", 00:18:53.829 "trsvcid": "4420" 00:18:53.829 }, 00:18:53.829 "peer_address": { 00:18:53.829 "trtype": "TCP", 00:18:53.829 "adrfam": "IPv4", 00:18:53.829 "traddr": "10.0.0.1", 00:18:53.829 "trsvcid": "36542" 00:18:53.829 }, 00:18:53.829 "auth": { 00:18:53.829 "state": "completed", 00:18:53.829 "digest": "sha256", 00:18:53.829 "dhgroup": "ffdhe4096" 00:18:53.829 } 00:18:53.829 } 00:18:53.829 ]' 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.829 14:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.088 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:18:54.660 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.660 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:54.660 14:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.660 14:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.660 14:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.660 14:04:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.660 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.660 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.921 14:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.181 00:18:55.181 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.181 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.181 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.442 { 00:18:55.442 "cntlid": 31, 00:18:55.442 "qid": 0, 00:18:55.442 "state": "enabled", 00:18:55.442 "thread": "nvmf_tgt_poll_group_000", 00:18:55.442 "listen_address": { 00:18:55.442 "trtype": "TCP", 00:18:55.442 "adrfam": "IPv4", 00:18:55.442 "traddr": "10.0.0.2", 00:18:55.442 "trsvcid": "4420" 00:18:55.442 }, 
00:18:55.442 "peer_address": { 00:18:55.442 "trtype": "TCP", 00:18:55.442 "adrfam": "IPv4", 00:18:55.442 "traddr": "10.0.0.1", 00:18:55.442 "trsvcid": "36586" 00:18:55.442 }, 00:18:55.442 "auth": { 00:18:55.442 "state": "completed", 00:18:55.442 "digest": "sha256", 00:18:55.442 "dhgroup": "ffdhe4096" 00:18:55.442 } 00:18:55.442 } 00:18:55.442 ]' 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.442 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.702 14:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.272 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.533 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.794 00:18:56.794 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.794 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.794 14:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.054 { 00:18:57.054 "cntlid": 33, 00:18:57.054 "qid": 0, 00:18:57.054 "state": "enabled", 00:18:57.054 "thread": "nvmf_tgt_poll_group_000", 00:18:57.054 "listen_address": { 00:18:57.054 "trtype": "TCP", 00:18:57.054 "adrfam": "IPv4", 00:18:57.054 "traddr": "10.0.0.2", 00:18:57.054 "trsvcid": "4420" 00:18:57.054 }, 00:18:57.054 "peer_address": { 00:18:57.054 "trtype": "TCP", 00:18:57.054 "adrfam": "IPv4", 00:18:57.054 "traddr": "10.0.0.1", 00:18:57.054 "trsvcid": "45440" 00:18:57.054 }, 00:18:57.054 "auth": { 00:18:57.054 "state": "completed", 00:18:57.054 "digest": "sha256", 00:18:57.054 "dhgroup": "ffdhe6144" 00:18:57.054 } 00:18:57.054 } 00:18:57.054 ]' 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.054 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.314 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.314 14:04:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.314 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.314 14:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.254 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.255 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.518 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.819 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.819 { 00:18:58.819 "cntlid": 35, 00:18:58.819 "qid": 0, 00:18:58.819 "state": "enabled", 00:18:58.819 "thread": "nvmf_tgt_poll_group_000", 00:18:58.819 "listen_address": { 00:18:58.819 "trtype": "TCP", 00:18:58.819 "adrfam": "IPv4", 00:18:58.819 "traddr": "10.0.0.2", 00:18:58.819 "trsvcid": "4420" 00:18:58.819 }, 00:18:58.819 "peer_address": { 00:18:58.819 "trtype": "TCP", 00:18:58.819 "adrfam": "IPv4", 00:18:58.819 "traddr": "10.0.0.1", 00:18:58.819 "trsvcid": "45454" 00:18:58.819 }, 00:18:58.819 "auth": { 00:18:58.819 "state": "completed", 00:18:58.819 "digest": "sha256", 00:18:58.819 "dhgroup": "ffdhe6144" 00:18:58.819 } 00:18:58.820 } 00:18:58.820 ]' 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.820 14:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.080 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.022 14:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.282 00:19:00.282 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.282 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.282 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.542 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.542 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.542 14:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.542 14:04:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:00.542 14:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.542 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.542 { 00:19:00.542 "cntlid": 37, 00:19:00.543 "qid": 0, 00:19:00.543 "state": "enabled", 00:19:00.543 "thread": "nvmf_tgt_poll_group_000", 00:19:00.543 "listen_address": { 00:19:00.543 "trtype": "TCP", 00:19:00.543 "adrfam": "IPv4", 00:19:00.543 "traddr": "10.0.0.2", 00:19:00.543 "trsvcid": "4420" 00:19:00.543 }, 00:19:00.543 "peer_address": { 00:19:00.543 "trtype": "TCP", 00:19:00.543 "adrfam": "IPv4", 00:19:00.543 "traddr": "10.0.0.1", 00:19:00.543 "trsvcid": "45474" 00:19:00.543 }, 00:19:00.543 "auth": { 00:19:00.543 "state": "completed", 00:19:00.543 "digest": "sha256", 00:19:00.543 "dhgroup": "ffdhe6144" 00:19:00.543 } 00:19:00.543 } 00:19:00.543 ]' 00:19:00.543 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.543 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.543 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.543 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.543 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.803 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.803 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.803 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.803 14:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.742 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:01.743 14:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.743 14:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.743 14:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.743 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.743 14:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.003 00:19:02.003 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.003 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.003 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.262 { 00:19:02.262 "cntlid": 39, 00:19:02.262 "qid": 0, 00:19:02.262 "state": "enabled", 00:19:02.262 "thread": "nvmf_tgt_poll_group_000", 00:19:02.262 "listen_address": { 00:19:02.262 "trtype": "TCP", 00:19:02.262 "adrfam": "IPv4", 00:19:02.262 "traddr": "10.0.0.2", 00:19:02.262 "trsvcid": "4420" 00:19:02.262 }, 00:19:02.262 "peer_address": { 00:19:02.262 "trtype": "TCP", 00:19:02.262 "adrfam": "IPv4", 00:19:02.262 "traddr": "10.0.0.1", 00:19:02.262 "trsvcid": "45498" 00:19:02.262 }, 00:19:02.262 "auth": { 00:19:02.262 "state": "completed", 00:19:02.262 "digest": "sha256", 00:19:02.262 "dhgroup": "ffdhe6144" 00:19:02.262 } 00:19:02.262 } 00:19:02.262 ]' 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.262 14:05:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.262 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.522 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.522 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.522 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.522 14:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.461 14:05:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.461 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.031 00:19:04.031 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.031 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.031 14:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.031 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.031 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.031 14:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.031 14:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.031 14:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.031 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.031 { 00:19:04.031 "cntlid": 41, 00:19:04.031 "qid": 0, 00:19:04.031 "state": "enabled", 00:19:04.031 "thread": "nvmf_tgt_poll_group_000", 00:19:04.031 "listen_address": { 00:19:04.031 "trtype": "TCP", 00:19:04.031 "adrfam": "IPv4", 00:19:04.031 "traddr": "10.0.0.2", 00:19:04.031 "trsvcid": "4420" 00:19:04.031 }, 00:19:04.031 "peer_address": { 00:19:04.031 "trtype": "TCP", 00:19:04.031 "adrfam": "IPv4", 00:19:04.031 "traddr": "10.0.0.1", 00:19:04.031 "trsvcid": "45538" 00:19:04.031 }, 00:19:04.031 "auth": { 00:19:04.031 "state": "completed", 00:19:04.031 "digest": "sha256", 00:19:04.031 "dhgroup": "ffdhe8192" 00:19:04.031 } 00:19:04.031 } 00:19:04.032 ]' 00:19:04.032 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.292 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.292 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.292 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.292 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.292 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.292 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.292 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.551 14:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:05.121 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.381 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.951 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
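[Annotation] All secrets in this log use the DHHC-1 container format defined for NVMe in-band authentication: DHHC-1:<t>:<base64 of key plus 4-byte CRC-32>:, where <t> indicates how the raw secret was transformed before encoding (00 means unhashed; 01, 02, 03 mean an HMAC-SHA-256, -384, or -512 transform). That mapping comes from the NVMe TP 8006 / nvme-cli convention, not from anything printed in this log, and it explains the differing base64 lengths of the secrets above (32-, 48-, and 64-byte keys, each followed by the CRC). Recent nvme-cli builds can mint such keys; a hypothetical invocation, with option names taken from nvme-cli documentation rather than this log (verify against your installed version):

  # Generate a transformed 32-byte DHCHAP key bound to the host NQN.
  # --hmac selects the <t> field of the resulting DHHC-1:<t>:...: string.
  nvme gen-dhchap-key --hmac=1 --key-length=32 \
      --nqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb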
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.951 { 00:19:05.951 "cntlid": 43, 00:19:05.951 "qid": 0, 00:19:05.951 "state": "enabled", 00:19:05.951 "thread": "nvmf_tgt_poll_group_000", 00:19:05.951 "listen_address": { 00:19:05.951 "trtype": "TCP", 00:19:05.951 "adrfam": "IPv4", 00:19:05.951 "traddr": "10.0.0.2", 00:19:05.951 "trsvcid": "4420" 00:19:05.951 }, 00:19:05.951 "peer_address": { 00:19:05.951 "trtype": "TCP", 00:19:05.951 "adrfam": "IPv4", 00:19:05.951 "traddr": "10.0.0.1", 00:19:05.951 "trsvcid": "33198" 00:19:05.951 }, 00:19:05.951 "auth": { 00:19:05.951 "state": "completed", 00:19:05.951 "digest": "sha256", 00:19:05.951 "dhgroup": "ffdhe8192" 00:19:05.951 } 00:19:05.951 } 00:19:05.951 ]' 00:19:05.951 14:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.951 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.951 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.211 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.211 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.211 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.211 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.211 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.211 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.151 14:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.151 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.720 00:19:07.720 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.720 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.720 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.980 { 00:19:07.980 "cntlid": 45, 00:19:07.980 "qid": 0, 00:19:07.980 "state": "enabled", 00:19:07.980 "thread": "nvmf_tgt_poll_group_000", 00:19:07.980 "listen_address": { 00:19:07.980 "trtype": "TCP", 00:19:07.980 "adrfam": "IPv4", 00:19:07.980 "traddr": "10.0.0.2", 00:19:07.980 "trsvcid": "4420" 
00:19:07.980 }, 00:19:07.980 "peer_address": { 00:19:07.980 "trtype": "TCP", 00:19:07.980 "adrfam": "IPv4", 00:19:07.980 "traddr": "10.0.0.1", 00:19:07.980 "trsvcid": "33232" 00:19:07.980 }, 00:19:07.980 "auth": { 00:19:07.980 "state": "completed", 00:19:07.980 "digest": "sha256", 00:19:07.980 "dhgroup": "ffdhe8192" 00:19:07.980 } 00:19:07.980 } 00:19:07.980 ]' 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.980 14:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.980 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.980 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.980 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.240 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:08.810 14:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.071 14:05:07 
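[Annotation] Worth noting for anyone reading these traces: two SPDK applications are being driven over separate RPC sockets. rpc_cmd (the autotest_common.sh helper whose xtrace_disable lines bracket every call) talks to the nvmf target on the default socket, while hostrpc, per the target/auth.sh@31 lines, points the same rpc.py at a second, host-side application. A simplified sketch of the two helpers (the real rpc_cmd adds retry and xtrace suppression):

  rpc_cmd() {   # target application, default socket /var/tmp/spdk.sock
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
  }
  hostrpc() {   # host-side application running the bdev_nvme initiator
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }

This is why nvmf_subsystem_* calls go through rpc_cmd and bdev_nvme_* calls go through hostrpc throughout the log.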
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.071 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.641 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.641 { 00:19:09.641 "cntlid": 47, 00:19:09.641 "qid": 0, 00:19:09.641 "state": "enabled", 00:19:09.641 "thread": "nvmf_tgt_poll_group_000", 00:19:09.641 "listen_address": { 00:19:09.641 "trtype": "TCP", 00:19:09.641 "adrfam": "IPv4", 00:19:09.641 "traddr": "10.0.0.2", 00:19:09.641 "trsvcid": "4420" 00:19:09.641 }, 00:19:09.641 "peer_address": { 00:19:09.641 "trtype": "TCP", 00:19:09.641 "adrfam": "IPv4", 00:19:09.641 "traddr": "10.0.0.1", 00:19:09.641 "trsvcid": "33250" 00:19:09.641 }, 00:19:09.641 "auth": { 00:19:09.641 "state": "completed", 00:19:09.641 "digest": "sha256", 00:19:09.641 "dhgroup": "ffdhe8192" 00:19:09.641 } 00:19:09.641 } 00:19:09.641 ]' 00:19:09.641 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.901 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.901 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.901 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.901 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.901 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.901 14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.901 
14:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.161 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:10.731 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.731 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:10.731 14:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.732 14:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
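[Annotation] The target/auth.sh@91-@96 markers just above show the outer driver advancing: the digest loop moves from sha256 to sha384 and the DH-group loop restarts at null (DH-CHAP without an ephemeral Diffie-Hellman exchange). Reconstructed from those markers, the driver is a plain triple nest; the exact array contents are inferred from the values this run exercises, and any groups or digests not yet seen at this point are assumptions:

  for digest in "${digests[@]}"; do        # target/auth.sh@91
      for dhgroup in "${dhgroups[@]}"; do  # target/auth.sh@92
          for keyid in "${!keys[@]}"; do   # target/auth.sh@93  (keys 0..3 in this run)
              # Pin the host to exactly one digest/dhgroup combination...
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                  --dhchap-dhgroups "$dhgroup"                          # @94
              # ...and run one authenticated attach/verify/teardown pass.
              connect_authenticate "$digest" "$dhgroup" "$keyid"        # @96
          done
      done
  done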
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.992 00:19:10.992 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.992 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.992 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.252 { 00:19:11.252 "cntlid": 49, 00:19:11.252 "qid": 0, 00:19:11.252 "state": "enabled", 00:19:11.252 "thread": "nvmf_tgt_poll_group_000", 00:19:11.252 "listen_address": { 00:19:11.252 "trtype": "TCP", 00:19:11.252 "adrfam": "IPv4", 00:19:11.252 "traddr": "10.0.0.2", 00:19:11.252 "trsvcid": "4420" 00:19:11.252 }, 00:19:11.252 "peer_address": { 00:19:11.252 "trtype": "TCP", 00:19:11.252 "adrfam": "IPv4", 00:19:11.252 "traddr": "10.0.0.1", 00:19:11.252 "trsvcid": "33274" 00:19:11.252 }, 00:19:11.252 "auth": { 00:19:11.252 "state": "completed", 00:19:11.252 "digest": "sha384", 00:19:11.252 "dhgroup": "null" 00:19:11.252 } 00:19:11.252 } 00:19:11.252 ]' 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.252 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.513 14:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:12.081 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.341 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.341 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.601 { 00:19:12.601 "cntlid": 51, 00:19:12.601 "qid": 0, 00:19:12.601 "state": "enabled", 00:19:12.601 "thread": "nvmf_tgt_poll_group_000", 00:19:12.601 "listen_address": { 00:19:12.601 "trtype": "TCP", 00:19:12.601 "adrfam": "IPv4", 00:19:12.601 "traddr": "10.0.0.2", 00:19:12.601 "trsvcid": "4420" 00:19:12.601 }, 00:19:12.601 "peer_address": { 00:19:12.601 "trtype": "TCP", 00:19:12.601 "adrfam": "IPv4", 00:19:12.601 "traddr": "10.0.0.1", 00:19:12.601 "trsvcid": "33308" 00:19:12.601 }, 00:19:12.601 "auth": { 00:19:12.601 "state": "completed", 00:19:12.601 "digest": "sha384", 00:19:12.601 "dhgroup": "null" 00:19:12.601 } 00:19:12.601 } 00:19:12.601 ]' 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.601 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.861 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.861 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.861 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.861 14:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:13.800 14:05:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.800 14:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.060 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.060 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.060 { 00:19:14.060 "cntlid": 53, 00:19:14.060 "qid": 0, 00:19:14.060 "state": "enabled", 00:19:14.060 "thread": "nvmf_tgt_poll_group_000", 00:19:14.060 "listen_address": { 00:19:14.060 "trtype": "TCP", 00:19:14.060 "adrfam": "IPv4", 00:19:14.060 "traddr": "10.0.0.2", 00:19:14.060 "trsvcid": "4420" 00:19:14.060 }, 00:19:14.060 "peer_address": { 00:19:14.060 "trtype": "TCP", 00:19:14.060 "adrfam": "IPv4", 00:19:14.060 "traddr": "10.0.0.1", 00:19:14.060 "trsvcid": "33328" 00:19:14.060 }, 00:19:14.060 "auth": { 00:19:14.060 "state": "completed", 00:19:14.060 "digest": "sha384", 00:19:14.060 "dhgroup": "null" 00:19:14.060 } 00:19:14.060 } 00:19:14.060 ]' 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.320 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.580 14:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.150 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.412 00:19:15.412 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.412 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.412 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.673 { 00:19:15.673 "cntlid": 55, 00:19:15.673 "qid": 0, 00:19:15.673 "state": "enabled", 00:19:15.673 "thread": "nvmf_tgt_poll_group_000", 00:19:15.673 "listen_address": { 00:19:15.673 "trtype": "TCP", 00:19:15.673 "adrfam": "IPv4", 00:19:15.673 "traddr": "10.0.0.2", 00:19:15.673 "trsvcid": "4420" 00:19:15.673 }, 00:19:15.673 "peer_address": { 00:19:15.673 "trtype": "TCP", 00:19:15.673 "adrfam": "IPv4", 00:19:15.673 "traddr": "10.0.0.1", 00:19:15.673 "trsvcid": "33354" 00:19:15.673 }, 00:19:15.673 "auth": { 00:19:15.673 "state": "completed", 00:19:15.673 "digest": "sha384", 00:19:15.673 "dhgroup": "null" 00:19:15.673 } 00:19:15.673 } 00:19:15.673 ]' 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:15.673 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.674 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.674 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.674 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.935 14:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:16.878 14:05:14 
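[Annotation] Each repeated block in this section is one pass of connect_authenticate, and its body can be read straight off the @34-@49 xtrace markers: add the host with the key under test on the target, attach from the host side with the same key, assert on the negotiated qpair, detach. A condensed reconstruction (hostnqn stands for the uuid-form NQN used throughout; ckeys is the controller-key array set up earlier in the script; the @44-@48 assertions are elided to one comment):

  connect_authenticate() {                 # sketch of target/auth.sh@34-@49
      local digest=$1 dhgroup=$2 key=key$3
      # Bidirectional auth only when a paired controller key exists;
      # key3 has no ckey3, so its passes are unidirectional (see @37/@39 above).
      local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

      rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "$key" "${ckey[@]}"                               # @39
      hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
          --dhchap-key "$key" "${ckey[@]}"                               # @40
      # @44-@48: assert controller name plus .auth.digest/.dhgroup/.state.
      hostrpc bdev_nvme_detach_controller nvme0                          # @49
  }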
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.878 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.879 14:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.140 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.140 { 00:19:17.140 "cntlid": 57, 00:19:17.140 "qid": 0, 00:19:17.140 "state": "enabled", 00:19:17.140 "thread": "nvmf_tgt_poll_group_000", 00:19:17.140 "listen_address": { 00:19:17.140 "trtype": "TCP", 00:19:17.140 "adrfam": "IPv4", 00:19:17.140 "traddr": "10.0.0.2", 00:19:17.140 "trsvcid": "4420" 00:19:17.140 }, 00:19:17.140 "peer_address": { 00:19:17.140 "trtype": "TCP", 00:19:17.140 "adrfam": "IPv4", 00:19:17.140 "traddr": "10.0.0.1", 00:19:17.140 "trsvcid": "45426" 00:19:17.140 }, 00:19:17.140 "auth": { 00:19:17.140 "state": "completed", 00:19:17.140 "digest": "sha384", 00:19:17.140 "dhgroup": "ffdhe2048" 00:19:17.140 } 00:19:17.140 } 00:19:17.140 ]' 00:19:17.140 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.401 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.401 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.401 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.401 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.401 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.401 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.401 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.661 14:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.231 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.492 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.492 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.753 { 00:19:18.753 "cntlid": 59, 00:19:18.753 "qid": 0, 00:19:18.753 "state": "enabled", 00:19:18.753 "thread": "nvmf_tgt_poll_group_000", 00:19:18.753 "listen_address": { 00:19:18.753 "trtype": "TCP", 00:19:18.753 "adrfam": "IPv4", 00:19:18.753 "traddr": "10.0.0.2", 00:19:18.753 "trsvcid": "4420" 00:19:18.753 }, 00:19:18.753 "peer_address": { 00:19:18.753 "trtype": "TCP", 00:19:18.753 "adrfam": "IPv4", 00:19:18.753 
"traddr": "10.0.0.1", 00:19:18.753 "trsvcid": "45450" 00:19:18.753 }, 00:19:18.753 "auth": { 00:19:18.753 "state": "completed", 00:19:18.753 "digest": "sha384", 00:19:18.753 "dhgroup": "ffdhe2048" 00:19:18.753 } 00:19:18.753 } 00:19:18.753 ]' 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.753 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.054 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.054 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.054 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.054 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.054 14:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.054 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.021 14:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.282 00:19:20.282 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.282 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.282 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.542 { 00:19:20.542 "cntlid": 61, 00:19:20.542 "qid": 0, 00:19:20.542 "state": "enabled", 00:19:20.542 "thread": "nvmf_tgt_poll_group_000", 00:19:20.542 "listen_address": { 00:19:20.542 "trtype": "TCP", 00:19:20.542 "adrfam": "IPv4", 00:19:20.542 "traddr": "10.0.0.2", 00:19:20.542 "trsvcid": "4420" 00:19:20.542 }, 00:19:20.542 "peer_address": { 00:19:20.542 "trtype": "TCP", 00:19:20.542 "adrfam": "IPv4", 00:19:20.542 "traddr": "10.0.0.1", 00:19:20.542 "trsvcid": "45468" 00:19:20.542 }, 00:19:20.542 "auth": { 00:19:20.542 "state": "completed", 00:19:20.542 "digest": "sha384", 00:19:20.542 "dhgroup": "ffdhe2048" 00:19:20.542 } 00:19:20.542 } 00:19:20.542 ]' 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.542 14:05:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.802 14:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:21.374 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.634 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.893 00:19:21.894 14:05:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.894 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.894 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.894 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.894 14:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.894 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.894 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.894 14:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.894 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.894 { 00:19:21.894 "cntlid": 63, 00:19:21.894 "qid": 0, 00:19:21.894 "state": "enabled", 00:19:21.894 "thread": "nvmf_tgt_poll_group_000", 00:19:21.894 "listen_address": { 00:19:21.894 "trtype": "TCP", 00:19:21.894 "adrfam": "IPv4", 00:19:21.894 "traddr": "10.0.0.2", 00:19:21.894 "trsvcid": "4420" 00:19:21.894 }, 00:19:21.894 "peer_address": { 00:19:21.894 "trtype": "TCP", 00:19:21.894 "adrfam": "IPv4", 00:19:21.894 "traddr": "10.0.0.1", 00:19:21.894 "trsvcid": "45504" 00:19:21.894 }, 00:19:21.894 "auth": { 00:19:21.894 "state": "completed", 00:19:21.894 "digest": "sha384", 00:19:21.894 "dhgroup": "ffdhe2048" 00:19:21.894 } 00:19:21.894 } 00:19:21.894 ]' 00:19:21.894 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.154 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.155 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.155 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.155 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.155 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.155 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.155 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.415 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:22.986 14:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.986 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.246 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.506 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.506 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.506 { 
00:19:23.506 "cntlid": 65, 00:19:23.506 "qid": 0, 00:19:23.506 "state": "enabled", 00:19:23.507 "thread": "nvmf_tgt_poll_group_000", 00:19:23.507 "listen_address": { 00:19:23.507 "trtype": "TCP", 00:19:23.507 "adrfam": "IPv4", 00:19:23.507 "traddr": "10.0.0.2", 00:19:23.507 "trsvcid": "4420" 00:19:23.507 }, 00:19:23.507 "peer_address": { 00:19:23.507 "trtype": "TCP", 00:19:23.507 "adrfam": "IPv4", 00:19:23.507 "traddr": "10.0.0.1", 00:19:23.507 "trsvcid": "45530" 00:19:23.507 }, 00:19:23.507 "auth": { 00:19:23.507 "state": "completed", 00:19:23.507 "digest": "sha384", 00:19:23.507 "dhgroup": "ffdhe3072" 00:19:23.507 } 00:19:23.507 } 00:19:23.507 ]' 00:19:23.507 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.766 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.766 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.766 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.766 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.766 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.766 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.766 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.025 14:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.595 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.855 14:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.115 00:19:25.115 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.116 { 00:19:25.116 "cntlid": 67, 00:19:25.116 "qid": 0, 00:19:25.116 "state": "enabled", 00:19:25.116 "thread": "nvmf_tgt_poll_group_000", 00:19:25.116 "listen_address": { 00:19:25.116 "trtype": "TCP", 00:19:25.116 "adrfam": "IPv4", 00:19:25.116 "traddr": "10.0.0.2", 00:19:25.116 "trsvcid": "4420" 00:19:25.116 }, 00:19:25.116 "peer_address": { 00:19:25.116 "trtype": "TCP", 00:19:25.116 "adrfam": "IPv4", 00:19:25.116 "traddr": "10.0.0.1", 00:19:25.116 "trsvcid": "45558" 00:19:25.116 }, 00:19:25.116 "auth": { 00:19:25.116 "state": "completed", 00:19:25.116 "digest": "sha384", 00:19:25.116 "dhgroup": "ffdhe3072" 00:19:25.116 } 00:19:25.116 } 00:19:25.116 ]' 00:19:25.116 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.375 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.375 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.375 14:05:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.375 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.375 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.375 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.375 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.635 14:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.207 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.467 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.727 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.727 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.727 { 00:19:26.727 "cntlid": 69, 00:19:26.727 "qid": 0, 00:19:26.727 "state": "enabled", 00:19:26.727 "thread": "nvmf_tgt_poll_group_000", 00:19:26.727 "listen_address": { 00:19:26.727 "trtype": "TCP", 00:19:26.727 "adrfam": "IPv4", 00:19:26.727 "traddr": "10.0.0.2", 00:19:26.727 "trsvcid": "4420" 00:19:26.727 }, 00:19:26.727 "peer_address": { 00:19:26.727 "trtype": "TCP", 00:19:26.727 "adrfam": "IPv4", 00:19:26.727 "traddr": "10.0.0.1", 00:19:26.727 "trsvcid": "60208" 00:19:26.727 }, 00:19:26.727 "auth": { 00:19:26.727 "state": "completed", 00:19:26.727 "digest": "sha384", 00:19:26.728 "dhgroup": "ffdhe3072" 00:19:26.728 } 00:19:26.728 } 00:19:26.728 ]' 00:19:26.728 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.989 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.989 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.989 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.989 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.989 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.989 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.989 14:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.249 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.820 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.080 14:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.080 14:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.080 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.080 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.341 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.341 { 00:19:28.341 "cntlid": 71, 00:19:28.341 "qid": 0, 00:19:28.341 "state": "enabled", 00:19:28.341 "thread": "nvmf_tgt_poll_group_000", 00:19:28.341 "listen_address": { 00:19:28.341 "trtype": "TCP", 00:19:28.341 "adrfam": "IPv4", 00:19:28.341 "traddr": "10.0.0.2", 00:19:28.341 "trsvcid": "4420" 00:19:28.341 }, 00:19:28.341 "peer_address": { 00:19:28.341 "trtype": "TCP", 00:19:28.341 "adrfam": "IPv4", 00:19:28.341 "traddr": "10.0.0.1", 00:19:28.341 "trsvcid": "60234" 00:19:28.341 }, 00:19:28.341 "auth": { 00:19:28.341 "state": "completed", 00:19:28.341 "digest": "sha384", 00:19:28.341 "dhgroup": "ffdhe3072" 00:19:28.341 } 00:19:28.341 } 00:19:28.341 ]' 00:19:28.341 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.601 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.601 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.601 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.601 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.601 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.601 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.601 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.861 14:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:29.433 14:05:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.695 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.955 00:19:29.955 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.955 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.955 14:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.955 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.955 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.955 14:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.955 14:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.955 14:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.955 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.955 { 00:19:29.955 "cntlid": 73, 00:19:29.955 "qid": 0, 00:19:29.955 "state": "enabled", 00:19:29.955 "thread": "nvmf_tgt_poll_group_000", 00:19:29.955 "listen_address": { 00:19:29.955 "trtype": "TCP", 00:19:29.955 "adrfam": "IPv4", 00:19:29.955 "traddr": "10.0.0.2", 00:19:29.955 "trsvcid": "4420" 00:19:29.955 }, 00:19:29.955 "peer_address": { 00:19:29.955 "trtype": "TCP", 00:19:29.955 "adrfam": "IPv4", 00:19:29.955 "traddr": "10.0.0.1", 00:19:29.955 "trsvcid": "60260" 00:19:29.955 }, 00:19:29.955 "auth": { 00:19:29.956 
"state": "completed", 00:19:29.956 "digest": "sha384", 00:19:29.956 "dhgroup": "ffdhe4096" 00:19:29.956 } 00:19:29.956 } 00:19:29.956 ]' 00:19:29.956 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.216 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.216 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.216 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.216 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.216 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.216 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.216 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.477 14:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.049 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.309 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.310 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.570 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.570 { 00:19:31.570 "cntlid": 75, 00:19:31.570 "qid": 0, 00:19:31.570 "state": "enabled", 00:19:31.570 "thread": "nvmf_tgt_poll_group_000", 00:19:31.570 "listen_address": { 00:19:31.570 "trtype": "TCP", 00:19:31.570 "adrfam": "IPv4", 00:19:31.570 "traddr": "10.0.0.2", 00:19:31.570 "trsvcid": "4420" 00:19:31.570 }, 00:19:31.570 "peer_address": { 00:19:31.570 "trtype": "TCP", 00:19:31.570 "adrfam": "IPv4", 00:19:31.570 "traddr": "10.0.0.1", 00:19:31.570 "trsvcid": "60296" 00:19:31.570 }, 00:19:31.570 "auth": { 00:19:31.570 "state": "completed", 00:19:31.570 "digest": "sha384", 00:19:31.570 "dhgroup": "ffdhe4096" 00:19:31.570 } 00:19:31.570 } 00:19:31.570 ]' 00:19:31.570 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.831 14:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.772 14:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:33.032 00:19:33.032 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.032 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.032 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.293 { 00:19:33.293 "cntlid": 77, 00:19:33.293 "qid": 0, 00:19:33.293 "state": "enabled", 00:19:33.293 "thread": "nvmf_tgt_poll_group_000", 00:19:33.293 "listen_address": { 00:19:33.293 "trtype": "TCP", 00:19:33.293 "adrfam": "IPv4", 00:19:33.293 "traddr": "10.0.0.2", 00:19:33.293 "trsvcid": "4420" 00:19:33.293 }, 00:19:33.293 "peer_address": { 00:19:33.293 "trtype": "TCP", 00:19:33.293 "adrfam": "IPv4", 00:19:33.293 "traddr": "10.0.0.1", 00:19:33.293 "trsvcid": "60336" 00:19:33.293 }, 00:19:33.293 "auth": { 00:19:33.293 "state": "completed", 00:19:33.293 "digest": "sha384", 00:19:33.293 "dhgroup": "ffdhe4096" 00:19:33.293 } 00:19:33.293 } 00:19:33.293 ]' 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.293 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.553 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.553 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.553 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.553 14:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.494 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.754 00:19:34.754 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.754 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.754 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.014 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.014 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.014 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.014 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.014 14:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.014 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.014 { 00:19:35.014 "cntlid": 79, 00:19:35.014 "qid": 
0, 00:19:35.014 "state": "enabled", 00:19:35.014 "thread": "nvmf_tgt_poll_group_000", 00:19:35.014 "listen_address": { 00:19:35.014 "trtype": "TCP", 00:19:35.014 "adrfam": "IPv4", 00:19:35.014 "traddr": "10.0.0.2", 00:19:35.014 "trsvcid": "4420" 00:19:35.014 }, 00:19:35.014 "peer_address": { 00:19:35.014 "trtype": "TCP", 00:19:35.014 "adrfam": "IPv4", 00:19:35.014 "traddr": "10.0.0.1", 00:19:35.014 "trsvcid": "60362" 00:19:35.014 }, 00:19:35.014 "auth": { 00:19:35.014 "state": "completed", 00:19:35.014 "digest": "sha384", 00:19:35.014 "dhgroup": "ffdhe4096" 00:19:35.014 } 00:19:35.014 } 00:19:35.014 ]' 00:19:35.015 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.015 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.015 14:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.015 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.015 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.015 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.015 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.015 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.274 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.861 14:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.121 14:05:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.121 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.381 00:19:36.381 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.381 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.381 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.641 { 00:19:36.641 "cntlid": 81, 00:19:36.641 "qid": 0, 00:19:36.641 "state": "enabled", 00:19:36.641 "thread": "nvmf_tgt_poll_group_000", 00:19:36.641 "listen_address": { 00:19:36.641 "trtype": "TCP", 00:19:36.641 "adrfam": "IPv4", 00:19:36.641 "traddr": "10.0.0.2", 00:19:36.641 "trsvcid": "4420" 00:19:36.641 }, 00:19:36.641 "peer_address": { 00:19:36.641 "trtype": "TCP", 00:19:36.641 "adrfam": "IPv4", 00:19:36.641 "traddr": "10.0.0.1", 00:19:36.641 "trsvcid": "43898" 00:19:36.641 }, 00:19:36.641 "auth": { 00:19:36.641 "state": "completed", 00:19:36.641 "digest": "sha384", 00:19:36.641 "dhgroup": "ffdhe6144" 00:19:36.641 } 00:19:36.641 } 00:19:36.641 ]' 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.641 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.901 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.901 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.901 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.901 14:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.841 14:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.101 00:19:38.101 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.101 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.101 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.389 { 00:19:38.389 "cntlid": 83, 00:19:38.389 "qid": 0, 00:19:38.389 "state": "enabled", 00:19:38.389 "thread": "nvmf_tgt_poll_group_000", 00:19:38.389 "listen_address": { 00:19:38.389 "trtype": "TCP", 00:19:38.389 "adrfam": "IPv4", 00:19:38.389 "traddr": "10.0.0.2", 00:19:38.389 "trsvcid": "4420" 00:19:38.389 }, 00:19:38.389 "peer_address": { 00:19:38.389 "trtype": "TCP", 00:19:38.389 "adrfam": "IPv4", 00:19:38.389 "traddr": "10.0.0.1", 00:19:38.389 "trsvcid": "43932" 00:19:38.389 }, 00:19:38.389 "auth": { 00:19:38.389 "state": "completed", 00:19:38.389 "digest": "sha384", 00:19:38.389 "dhgroup": "ffdhe6144" 00:19:38.389 } 00:19:38.389 } 00:19:38.389 ]' 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.389 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.669 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.669 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.669 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.669 14:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret 
DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:39.247 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.506 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.765 00:19:40.025 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.025 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.025 14:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.025 { 00:19:40.025 "cntlid": 85, 00:19:40.025 "qid": 0, 00:19:40.025 "state": "enabled", 00:19:40.025 "thread": "nvmf_tgt_poll_group_000", 00:19:40.025 "listen_address": { 00:19:40.025 "trtype": "TCP", 00:19:40.025 "adrfam": "IPv4", 00:19:40.025 "traddr": "10.0.0.2", 00:19:40.025 "trsvcid": "4420" 00:19:40.025 }, 00:19:40.025 "peer_address": { 00:19:40.025 "trtype": "TCP", 00:19:40.025 "adrfam": "IPv4", 00:19:40.025 "traddr": "10.0.0.1", 00:19:40.025 "trsvcid": "43954" 00:19:40.025 }, 00:19:40.025 "auth": { 00:19:40.025 "state": "completed", 00:19:40.025 "digest": "sha384", 00:19:40.025 "dhgroup": "ffdhe6144" 00:19:40.025 } 00:19:40.025 } 00:19:40.025 ]' 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.025 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.285 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.285 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.285 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.285 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.285 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.285 14:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
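(annotation, not part of the captured log) The xtrace entries above are one pass of target/auth.sh's connect_authenticate loop over the (digest, dhgroup, key) matrix: constrain the host initiator's DH-HMAC-CHAP parameters, register the host's key on the subsystem, then attach a controller so the handshake actually runs. A minimal sketch of a single pass, assuming the same rpc.py path, host-RPC socket, and NQNs that appear in this log; the digest/dhgroup/keyid values are illustrative, and key1/ckey1 are assumed to have been registered with the keyring earlier in the script:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  digest=sha384 dhgroup=ffdhe6144 keyid=1
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
  # limit the host-side initiator to the combination under test
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # register the host on the target with its DH-HMAC-CHAP key (and controller key);
  # key$keyid / ckey$keyid must already exist, as set up earlier in auth.sh
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # attaching the controller is what triggers the authentication handshake
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"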
00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:41.224 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:41.225 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.225 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:41.225 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.225 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.225 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.225 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.225 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.484 00:19:41.743 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.743 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.744 { 00:19:41.744 "cntlid": 87, 00:19:41.744 "qid": 0, 00:19:41.744 "state": "enabled", 00:19:41.744 "thread": "nvmf_tgt_poll_group_000", 00:19:41.744 "listen_address": { 00:19:41.744 "trtype": "TCP", 00:19:41.744 "adrfam": "IPv4", 00:19:41.744 "traddr": "10.0.0.2", 00:19:41.744 "trsvcid": "4420" 00:19:41.744 }, 00:19:41.744 "peer_address": { 00:19:41.744 "trtype": "TCP", 00:19:41.744 "adrfam": "IPv4", 00:19:41.744 "traddr": "10.0.0.1", 00:19:41.744 "trsvcid": "43986" 00:19:41.744 }, 00:19:41.744 "auth": { 00:19:41.744 "state": "completed", 
00:19:41.744 "digest": "sha384", 00:19:41.744 "dhgroup": "ffdhe6144" 00:19:41.744 } 00:19:41.744 } 00:19:41.744 ]' 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.744 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.003 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.003 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.003 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.003 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.003 14:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.003 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.943 14:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.540 00:19:43.540 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.540 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.540 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.540 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.540 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.540 14:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.540 14:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.800 { 00:19:43.800 "cntlid": 89, 00:19:43.800 "qid": 0, 00:19:43.800 "state": "enabled", 00:19:43.800 "thread": "nvmf_tgt_poll_group_000", 00:19:43.800 "listen_address": { 00:19:43.800 "trtype": "TCP", 00:19:43.800 "adrfam": "IPv4", 00:19:43.800 "traddr": "10.0.0.2", 00:19:43.800 "trsvcid": "4420" 00:19:43.800 }, 00:19:43.800 "peer_address": { 00:19:43.800 "trtype": "TCP", 00:19:43.800 "adrfam": "IPv4", 00:19:43.800 "traddr": "10.0.0.1", 00:19:43.800 "trsvcid": "44018" 00:19:43.800 }, 00:19:43.800 "auth": { 00:19:43.800 "state": "completed", 00:19:43.800 "digest": "sha384", 00:19:43.800 "dhgroup": "ffdhe8192" 00:19:43.800 } 00:19:43.800 } 00:19:43.800 ]' 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.800 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.060 14:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.631 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.892 14:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
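(annotation, not part of the captured log) The bdev_nvme_get_controllers / nvmf_subsystem_get_qpairs entries surrounding this point are the verification half of each pass: the negotiated digest, dhgroup, and auth state are read back from the target's qpair listing with jq before the controller is detached. A minimal sketch of that check, mirroring the jq expressions in this log and assuming the same sockets and subsystem NQN; the expected sha384/ffdhe8192 values match the adjacent pass:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # the attached controller must be visible on the host side first
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # the qpair's auth block records what was actually negotiated on the target
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # tear down before the next (digest, dhgroup, key) combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0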
00:19:45.461 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.461 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.461 { 00:19:45.461 "cntlid": 91, 00:19:45.461 "qid": 0, 00:19:45.461 "state": "enabled", 00:19:45.461 "thread": "nvmf_tgt_poll_group_000", 00:19:45.461 "listen_address": { 00:19:45.461 "trtype": "TCP", 00:19:45.461 "adrfam": "IPv4", 00:19:45.461 "traddr": "10.0.0.2", 00:19:45.461 "trsvcid": "4420" 00:19:45.461 }, 00:19:45.461 "peer_address": { 00:19:45.461 "trtype": "TCP", 00:19:45.461 "adrfam": "IPv4", 00:19:45.461 "traddr": "10.0.0.1", 00:19:45.461 "trsvcid": "44050" 00:19:45.461 }, 00:19:45.461 "auth": { 00:19:45.461 "state": "completed", 00:19:45.462 "digest": "sha384", 00:19:45.462 "dhgroup": "ffdhe8192" 00:19:45.462 } 00:19:45.462 } 00:19:45.462 ]' 00:19:45.462 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.721 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.721 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.721 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.721 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.721 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.721 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.721 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.981 14:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.551 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.812 14:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.382 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.382 { 
00:19:47.382 "cntlid": 93, 00:19:47.382 "qid": 0, 00:19:47.382 "state": "enabled", 00:19:47.382 "thread": "nvmf_tgt_poll_group_000", 00:19:47.382 "listen_address": { 00:19:47.382 "trtype": "TCP", 00:19:47.382 "adrfam": "IPv4", 00:19:47.382 "traddr": "10.0.0.2", 00:19:47.382 "trsvcid": "4420" 00:19:47.382 }, 00:19:47.382 "peer_address": { 00:19:47.382 "trtype": "TCP", 00:19:47.382 "adrfam": "IPv4", 00:19:47.382 "traddr": "10.0.0.1", 00:19:47.382 "trsvcid": "60400" 00:19:47.382 }, 00:19:47.382 "auth": { 00:19:47.382 "state": "completed", 00:19:47.382 "digest": "sha384", 00:19:47.382 "dhgroup": "ffdhe8192" 00:19:47.382 } 00:19:47.382 } 00:19:47.382 ]' 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.382 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.642 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.642 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.642 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.642 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.642 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.642 14:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.582 14:05:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.582 14:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.583 14:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.583 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.583 14:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.152 00:19:49.152 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.152 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.152 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.412 { 00:19:49.412 "cntlid": 95, 00:19:49.412 "qid": 0, 00:19:49.412 "state": "enabled", 00:19:49.412 "thread": "nvmf_tgt_poll_group_000", 00:19:49.412 "listen_address": { 00:19:49.412 "trtype": "TCP", 00:19:49.412 "adrfam": "IPv4", 00:19:49.412 "traddr": "10.0.0.2", 00:19:49.412 "trsvcid": "4420" 00:19:49.412 }, 00:19:49.412 "peer_address": { 00:19:49.412 "trtype": "TCP", 00:19:49.412 "adrfam": "IPv4", 00:19:49.412 "traddr": "10.0.0.1", 00:19:49.412 "trsvcid": "60414" 00:19:49.412 }, 00:19:49.412 "auth": { 00:19:49.412 "state": "completed", 00:19:49.412 "digest": "sha384", 00:19:49.412 "dhgroup": "ffdhe8192" 00:19:49.412 } 00:19:49.412 } 00:19:49.412 ]' 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.412 14:05:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.412 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.672 14:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:50.242 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.502 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.763 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.763 { 00:19:50.763 "cntlid": 97, 00:19:50.763 "qid": 0, 00:19:50.763 "state": "enabled", 00:19:50.763 "thread": "nvmf_tgt_poll_group_000", 00:19:50.763 "listen_address": { 00:19:50.763 "trtype": "TCP", 00:19:50.763 "adrfam": "IPv4", 00:19:50.763 "traddr": "10.0.0.2", 00:19:50.763 "trsvcid": "4420" 00:19:50.763 }, 00:19:50.763 "peer_address": { 00:19:50.763 "trtype": "TCP", 00:19:50.763 "adrfam": "IPv4", 00:19:50.763 "traddr": "10.0.0.1", 00:19:50.763 "trsvcid": "60448" 00:19:50.763 }, 00:19:50.763 "auth": { 00:19:50.763 "state": "completed", 00:19:50.763 "digest": "sha512", 00:19:50.763 "dhgroup": "null" 00:19:50.763 } 00:19:50.763 } 00:19:50.763 ]' 00:19:50.763 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.024 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.024 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.024 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:51.024 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.024 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.024 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.024 14:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.285 14:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret 
DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=:
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:51.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:51.857 14:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:52.118 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:52.379
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:52.379 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:52.379 {
00:19:52.379 "cntlid": 99,
00:19:52.379 "qid": 0,
00:19:52.379 "state": "enabled",
00:19:52.379 "thread": "nvmf_tgt_poll_group_000",
00:19:52.379 "listen_address": {
00:19:52.379 "trtype": "TCP",
00:19:52.379 "adrfam": "IPv4",
00:19:52.379 "traddr": "10.0.0.2",
00:19:52.379 "trsvcid": "4420"
00:19:52.379 },
00:19:52.379 "peer_address": {
00:19:52.379 "trtype": "TCP",
00:19:52.379 "adrfam": "IPv4",
00:19:52.379 "traddr": "10.0.0.1",
00:19:52.379 "trsvcid": "60478"
00:19:52.379 },
00:19:52.379 "auth": {
00:19:52.379 "state": "completed",
00:19:52.379 "digest": "sha512",
00:19:52.379 "dhgroup": "null"
00:19:52.379 }
00:19:52.379 }
00:19:52.379 ]'
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:52.640 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:52.901 14:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==:
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:53.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:53.473 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:53.734 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:53.998
00:19:53.998 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:53.998 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:53.998 14:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:53.998 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:53.998 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:53.998 14:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:53.998 14:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.998 14:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:53.998 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:53.998 {
00:19:53.998 "cntlid": 101,
00:19:53.998 "qid": 0,
00:19:53.998 "state": "enabled",
00:19:53.998 "thread": "nvmf_tgt_poll_group_000",
00:19:53.998 "listen_address": {
00:19:53.998 "trtype": "TCP",
00:19:53.998 "adrfam": "IPv4",
00:19:53.998 "traddr": "10.0.0.2",
00:19:53.999 "trsvcid": "4420"
00:19:53.999 },
00:19:53.999 "peer_address": {
00:19:53.999 "trtype": "TCP",
00:19:53.999 "adrfam": "IPv4",
00:19:53.999 "traddr": "10.0.0.1",
00:19:53.999 "trsvcid": "60498"
00:19:53.999 },
00:19:53.999 "auth": {
00:19:53.999 "state": "completed",
00:19:53.999 "digest": "sha512",
00:19:53.999 "dhgroup": "null"
00:19:53.999 }
00:19:53.999 }
00:19:53.999 ]'
00:19:53.999 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:53.999 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:53.999 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:54.262 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:54.262 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:54.262 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:54.262 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:54.262 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:54.262 14:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7:
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:55.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:55.205 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:55.466
00:19:55.466 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:55.466 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:55.466 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:55.728 {
00:19:55.728 "cntlid": 103,
00:19:55.728 "qid": 0,
00:19:55.728 "state": "enabled",
00:19:55.728 "thread": "nvmf_tgt_poll_group_000",
00:19:55.728 "listen_address": {
00:19:55.728 "trtype": "TCP",
00:19:55.728 "adrfam": "IPv4",
00:19:55.728 "traddr": "10.0.0.2",
00:19:55.728 "trsvcid": "4420"
00:19:55.728 },
00:19:55.728 "peer_address": {
00:19:55.728 "trtype": "TCP",
00:19:55.728 "adrfam": "IPv4",
00:19:55.728 "traddr": "10.0.0.1",
00:19:55.728 "trsvcid": "60530"
00:19:55.728 },
00:19:55.728 "auth": {
00:19:55.728 "state": "completed",
00:19:55.728 "digest": "sha512",
00:19:55.728 "dhgroup": "null"
00:19:55.728 }
00:19:55.728 }
00:19:55.728 ]'
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:55.728 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:55.989 14:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=:
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:56.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:56.560 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:56.820 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:56.821 14:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:57.082
00:19:57.082 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:57.082 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:57.082 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:57.082 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:57.082 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:57.082 14:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:57.082 14:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:57.343 {
00:19:57.343 "cntlid": 105,
00:19:57.343 "qid": 0,
00:19:57.343 "state": "enabled",
00:19:57.343 "thread": "nvmf_tgt_poll_group_000",
00:19:57.343 "listen_address": {
00:19:57.343 "trtype": "TCP",
00:19:57.343 "adrfam": "IPv4",
00:19:57.343 "traddr": "10.0.0.2",
00:19:57.343 "trsvcid": "4420"
00:19:57.343 },
00:19:57.343 "peer_address": {
00:19:57.343 "trtype": "TCP",
00:19:57.343 "adrfam": "IPv4",
00:19:57.343 "traddr": "10.0.0.1",
00:19:57.343 "trsvcid": "50068"
00:19:57.343 },
00:19:57.343 "auth": {
00:19:57.343 "state": "completed",
00:19:57.343 "digest": "sha512",
00:19:57.343 "dhgroup": "ffdhe2048"
00:19:57.343 }
00:19:57.343 }
00:19:57.343 ]'
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:57.343 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:57.603 14:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=:
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:58.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:58.178 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:58.497 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:58.497
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:58.758 {
00:19:58.758 "cntlid": 107,
00:19:58.758 "qid": 0,
00:19:58.758 "state": "enabled",
00:19:58.758 "thread": "nvmf_tgt_poll_group_000",
00:19:58.758 "listen_address": {
00:19:58.758 "trtype": "TCP",
00:19:58.758 "adrfam": "IPv4",
00:19:58.758 "traddr": "10.0.0.2",
00:19:58.758 "trsvcid": "4420"
00:19:58.758 },
00:19:58.758 "peer_address": {
00:19:58.758 "trtype": "TCP",
00:19:58.758 "adrfam": "IPv4",
00:19:58.758 "traddr": "10.0.0.1",
00:19:58.758 "trsvcid": "50092"
00:19:58.758 },
00:19:58.758 "auth": {
00:19:58.758 "state": "completed",
00:19:58.758 "digest": "sha512",
00:19:58.758 "dhgroup": "ffdhe2048"
00:19:58.758 }
00:19:58.758 }
00:19:58.758 ]'
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:58.758 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:59.019 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:59.019 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:59.019 14:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:59.019 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==:
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:59.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:59.961 14:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:00.222
00:20:00.222 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:00.222 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:00.222 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:00.482 {
00:20:00.482 "cntlid": 109,
00:20:00.482 "qid": 0,
00:20:00.482 "state": "enabled",
00:20:00.482 "thread": "nvmf_tgt_poll_group_000",
00:20:00.482 "listen_address": {
00:20:00.482 "trtype": "TCP",
00:20:00.482 "adrfam": "IPv4",
00:20:00.482 "traddr": "10.0.0.2",
00:20:00.482 "trsvcid": "4420"
00:20:00.482 },
00:20:00.482 "peer_address": {
00:20:00.482 "trtype": "TCP",
00:20:00.482 "adrfam": "IPv4",
00:20:00.482 "traddr": "10.0.0.1",
00:20:00.482 "trsvcid": "50124"
00:20:00.482 },
00:20:00.482 "auth": {
00:20:00.482 "state": "completed",
00:20:00.482 "digest": "sha512",
00:20:00.482 "dhgroup": "ffdhe2048"
00:20:00.482 }
00:20:00.482 }
00:20:00.482 ]'
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:00.482 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:00.743 14:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7:
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:01.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:01.347 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:01.607 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:01.867
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:01.867 {
00:20:01.867 "cntlid": 111,
00:20:01.867 "qid": 0,
00:20:01.867 "state": "enabled",
00:20:01.867 "thread": "nvmf_tgt_poll_group_000",
00:20:01.867 "listen_address": {
00:20:01.867 "trtype": "TCP",
00:20:01.867 "adrfam": "IPv4",
00:20:01.867 "traddr": "10.0.0.2",
00:20:01.867 "trsvcid": "4420"
00:20:01.867 },
00:20:01.867 "peer_address": {
00:20:01.867 "trtype": "TCP",
00:20:01.867 "adrfam": "IPv4",
00:20:01.867 "traddr": "10.0.0.1",
00:20:01.867 "trsvcid": "50152"
00:20:01.867 },
00:20:01.867 "auth": {
00:20:01.867 "state": "completed",
00:20:01.867 "digest": "sha512",
00:20:01.867 "dhgroup": "ffdhe2048"
00:20:01.867 }
00:20:01.867 }
00:20:01.867 ]'
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:01.867 14:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:02.127 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:02.127 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:02.127 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:02.127 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:02.127 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:02.127 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=:
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:03.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:03.067 14:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:03.067 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:03.327
00:20:03.327 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:03.327 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:03.327 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:03.587 {
00:20:03.587 "cntlid": 113,
00:20:03.587 "qid": 0,
00:20:03.587 "state": "enabled",
00:20:03.587 "thread": "nvmf_tgt_poll_group_000",
00:20:03.587 "listen_address": {
00:20:03.587 "trtype": "TCP",
00:20:03.587 "adrfam": "IPv4",
00:20:03.587 "traddr": "10.0.0.2",
00:20:03.587 "trsvcid": "4420"
00:20:03.587 },
00:20:03.587 "peer_address": {
00:20:03.587 "trtype": "TCP",
00:20:03.587 "adrfam": "IPv4",
00:20:03.587 "traddr": "10.0.0.1",
00:20:03.587 "trsvcid": "50178"
00:20:03.587 },
00:20:03.587 "auth": {
00:20:03.587 "state": "completed",
00:20:03.587 "digest": "sha512",
00:20:03.587 "dhgroup": "ffdhe3072"
00:20:03.587 }
00:20:03.587 }
00:20:03.587 ]'
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:03.587 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:03.847 14:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=:
00:20:04.416 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:04.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:04.416 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:04.416 14:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:04.416 14:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.416 14:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:04.416 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:04.416 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:04.417 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.676 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.936
00:20:04.936 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:04.936 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:04.936 14:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:05.202 {
00:20:05.202 "cntlid": 115,
00:20:05.202 "qid": 0,
00:20:05.202 "state": "enabled",
00:20:05.202 "thread": "nvmf_tgt_poll_group_000",
00:20:05.202 "listen_address": {
00:20:05.202 "trtype": "TCP",
00:20:05.202 "adrfam": "IPv4",
00:20:05.202 "traddr": "10.0.0.2",
00:20:05.202 "trsvcid": "4420"
00:20:05.202 },
00:20:05.202 "peer_address": {
00:20:05.202 "trtype": "TCP",
00:20:05.202 "adrfam": "IPv4",
00:20:05.202 "traddr": "10.0.0.1",
00:20:05.202 "trsvcid": "50202"
00:20:05.202 },
00:20:05.202 "auth": {
00:20:05.202 "state": "completed",
00:20:05.202 "digest": "sha512",
00:20:05.202 "dhgroup": "ffdhe3072"
00:20:05.202 }
00:20:05.202 }
00:20:05.202 ]'
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.202 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:05.463 14:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==:
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:06.031 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.290 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.549
00:20:06.550 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:06.550 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:06.550 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:06.809 {
00:20:06.809 "cntlid": 117,
00:20:06.809 "qid": 0,
00:20:06.809 "state": "enabled",
00:20:06.809 "thread": "nvmf_tgt_poll_group_000",
00:20:06.809 "listen_address": {
00:20:06.809 "trtype": "TCP",
00:20:06.809 "adrfam": "IPv4",
00:20:06.809 "traddr": "10.0.0.2",
00:20:06.809 "trsvcid": "4420"
00:20:06.809 },
00:20:06.809 "peer_address": {
00:20:06.809 "trtype": "TCP",
00:20:06.809 "adrfam": "IPv4",
00:20:06.809 "traddr": "10.0.0.1",
00:20:06.809 "trsvcid": "44902"
00:20:06.809 },
00:20:06.809 "auth": {
00:20:06.809 "state": "completed",
00:20:06.809 "digest": "sha512",
00:20:06.809 "dhgroup": "ffdhe3072"
00:20:06.809 }
00:20:06.809 }
00:20:06.809 ]'
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:06.809 14:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.068 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7:
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:07.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:07.638 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:07.897 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:20:07.897 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:07.897 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:07.897 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:07.897 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:07.897 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:07.898 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:20:07.898 14:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.898 14:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.898 14:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.898 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:07.898 14:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:08.157
00:20:08.157 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:08.157 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:08.157 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:08.417 {
00:20:08.417 "cntlid": 119,
00:20:08.417 "qid": 0,
00:20:08.417 "state": "enabled",
00:20:08.417 "thread": "nvmf_tgt_poll_group_000",
00:20:08.417 "listen_address": {
00:20:08.417 "trtype": "TCP",
00:20:08.417 "adrfam": "IPv4",
00:20:08.417 "traddr": "10.0.0.2",
00:20:08.417 "trsvcid": "4420"
00:20:08.417 },
00:20:08.417 "peer_address": {
00:20:08.417 "trtype": "TCP",
00:20:08.417 "adrfam": "IPv4",
00:20:08.417 "traddr": "10.0.0.1",
00:20:08.417 "trsvcid": "44928"
00:20:08.417 },
00:20:08.417 "auth": {
00:20:08.417 "state": "completed",
00:20:08.417 "digest": "sha512",
00:20:08.417 "dhgroup": "ffdhe3072"
00:20:08.417 }
00:20:08.417 }
00:20:08.417 ]'
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:08.417 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:08.676 14:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=:
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:09.245 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:09.505 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:09.764
00:20:09.765 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:09.765 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:09.765 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:10.024 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.024 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.024 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:10.024 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.024 14:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:10.024 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:10.024 {
00:20:10.024 "cntlid": 121,
00:20:10.024 "qid": 0,
00:20:10.024 "state": "enabled",
00:20:10.024 "thread": "nvmf_tgt_poll_group_000",
00:20:10.024 "listen_address": {
00:20:10.024 "trtype": "TCP",
00:20:10.024 "adrfam": "IPv4",
00:20:10.024 "traddr": "10.0.0.2",
00:20:10.024 "trsvcid": "4420"
00:20:10.024 },
00:20:10.024 "peer_address": {
00:20:10.024 "trtype": "TCP",
00:20:10.024 "adrfam": "IPv4",
00:20:10.024 "traddr": "10.0.0.1",
00:20:10.024 "trsvcid": "44952"
00:20:10.024 },
00:20:10.024 "auth": {
00:20:10.024 "state": "completed",
00:20:10.024 "digest": "sha512",
00:20:10.024 "dhgroup": "ffdhe4096"
00:20:10.024 }
00:20:10.024 }
00:20:10.024 ]'
00:20:10.024 14:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:10.024 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:10.024 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:10.024 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:10.024 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:10.024 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.024 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.024 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.284 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=:
00:20:10.854 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:10.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:10.854 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:10.854 14:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:10.854 14:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.127 14:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:11.127 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:11.127 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:11.127 14:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:11.127 14:06:09
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.127 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.386 00:20:11.386 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.386 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.386 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.647 { 00:20:11.647 "cntlid": 123, 00:20:11.647 "qid": 0, 00:20:11.647 "state": "enabled", 00:20:11.647 "thread": "nvmf_tgt_poll_group_000", 00:20:11.647 "listen_address": { 00:20:11.647 "trtype": "TCP", 00:20:11.647 "adrfam": "IPv4", 00:20:11.647 "traddr": "10.0.0.2", 00:20:11.647 "trsvcid": "4420" 00:20:11.647 }, 00:20:11.647 "peer_address": { 00:20:11.647 "trtype": "TCP", 00:20:11.647 "adrfam": "IPv4", 00:20:11.647 "traddr": "10.0.0.1", 00:20:11.647 "trsvcid": "44986" 00:20:11.647 }, 00:20:11.647 "auth": { 00:20:11.647 "state": "completed", 00:20:11.647 "digest": "sha512", 00:20:11.647 "dhgroup": "ffdhe4096" 00:20:11.647 } 00:20:11.647 } 00:20:11.647 ]' 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.647 14:06:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.647 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.907 14:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.495 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.755 14:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.015 00:20:13.015 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.015 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.015 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.276 { 00:20:13.276 "cntlid": 125, 00:20:13.276 "qid": 0, 00:20:13.276 "state": "enabled", 00:20:13.276 "thread": "nvmf_tgt_poll_group_000", 00:20:13.276 "listen_address": { 00:20:13.276 "trtype": "TCP", 00:20:13.276 "adrfam": "IPv4", 00:20:13.276 "traddr": "10.0.0.2", 00:20:13.276 "trsvcid": "4420" 00:20:13.276 }, 00:20:13.276 "peer_address": { 00:20:13.276 "trtype": "TCP", 00:20:13.276 "adrfam": "IPv4", 00:20:13.276 "traddr": "10.0.0.1", 00:20:13.276 "trsvcid": "45018" 00:20:13.276 }, 00:20:13.276 "auth": { 00:20:13.276 "state": "completed", 00:20:13.276 "digest": "sha512", 00:20:13.276 "dhgroup": "ffdhe4096" 00:20:13.276 } 00:20:13.276 } 00:20:13.276 ]' 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.276 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.537 14:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
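The ffdhe4096 passes above and the ffdhe6144/ffdhe8192 passes below all repeat one fixed cycle per key. As a reading aid, here is a minimal bash sketch of that cycle, reconstructed only from commands visible in this trace. The rpc.py path, the /var/tmp/host.sock initiator socket, and the hostnqn/subnqn values are copied from the log; the key0..key3 and ckey0..ckey3 names are assumed to have been registered with the target and host earlier in the run, which this excerpt does not show.

  #!/usr/bin/env bash
  # One connect_authenticate pass as traced above (a sketch, not the real auth.sh).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
  subnqn=nqn.2024-03.io.spdk:cnode0
  digest=sha512 dhgroup=ffdhe4096 keyid=2

  # Host side: pin the SPDK initiator to a single digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target side: allow the host NQN with the key under test. In the trace the
  # controller key is only passed when a ckey exists for this keyid (key3 has none).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Authenticate with the SPDK initiator and verify the resulting qpair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Authenticate again with the kernel initiator, then clean up. In the log the
  # two secrets are literal DHHC-1:xx:...: strings rather than variables.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The DHHC-1:xx:...: values passed to --dhchap-secret follow the NVMe DH-HMAC-CHAP qualified-key format: the two-digit field names the transform applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the remainder is the base64-encoded secret with its trailing CRC. The four jq checks in the sketch correspond to the auth.sh@44-48 assertions visible throughout this trace: controller name, digest, dhgroup, and an auth state of "completed".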
00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:14.107 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.367 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.627 00:20:14.627 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.627 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.627 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.887 { 00:20:14.887 "cntlid": 127, 00:20:14.887 "qid": 0, 00:20:14.887 "state": "enabled", 00:20:14.887 "thread": "nvmf_tgt_poll_group_000", 00:20:14.887 "listen_address": { 00:20:14.887 "trtype": "TCP", 00:20:14.887 "adrfam": "IPv4", 00:20:14.887 "traddr": "10.0.0.2", 00:20:14.887 "trsvcid": "4420" 00:20:14.887 }, 00:20:14.887 "peer_address": { 00:20:14.887 "trtype": "TCP", 00:20:14.887 "adrfam": "IPv4", 00:20:14.887 "traddr": "10.0.0.1", 00:20:14.887 "trsvcid": "45044" 00:20:14.887 }, 00:20:14.887 "auth": { 00:20:14.887 "state": "completed", 00:20:14.887 "digest": "sha512", 00:20:14.887 "dhgroup": "ffdhe4096" 00:20:14.887 } 00:20:14.887 } 00:20:14.887 ]' 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.887 14:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.147 14:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:20:15.716 14:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:15.977 14:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.977 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.546 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.546 14:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.547 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.547 { 00:20:16.547 "cntlid": 129, 00:20:16.547 "qid": 0, 00:20:16.547 "state": "enabled", 00:20:16.547 "thread": "nvmf_tgt_poll_group_000", 00:20:16.547 "listen_address": { 00:20:16.547 "trtype": "TCP", 00:20:16.547 "adrfam": "IPv4", 00:20:16.547 "traddr": "10.0.0.2", 00:20:16.547 "trsvcid": "4420" 00:20:16.547 }, 00:20:16.547 "peer_address": { 00:20:16.547 "trtype": "TCP", 00:20:16.547 "adrfam": "IPv4", 00:20:16.547 "traddr": "10.0.0.1", 00:20:16.547 "trsvcid": "37054" 00:20:16.547 }, 00:20:16.547 "auth": { 00:20:16.547 "state": "completed", 00:20:16.547 "digest": "sha512", 00:20:16.547 "dhgroup": "ffdhe6144" 00:20:16.547 } 00:20:16.547 } 00:20:16.547 ]' 00:20:16.547 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.547 14:06:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.547 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.807 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.807 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.807 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.807 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.807 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.807 14:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.751 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.752 14:06:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.752 14:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.098 00:20:18.098 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.098 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.098 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.358 { 00:20:18.358 "cntlid": 131, 00:20:18.358 "qid": 0, 00:20:18.358 "state": "enabled", 00:20:18.358 "thread": "nvmf_tgt_poll_group_000", 00:20:18.358 "listen_address": { 00:20:18.358 "trtype": "TCP", 00:20:18.358 "adrfam": "IPv4", 00:20:18.358 "traddr": "10.0.0.2", 00:20:18.358 "trsvcid": "4420" 00:20:18.358 }, 00:20:18.358 "peer_address": { 00:20:18.358 "trtype": "TCP", 00:20:18.358 "adrfam": "IPv4", 00:20:18.358 "traddr": "10.0.0.1", 00:20:18.358 "trsvcid": "37080" 00:20:18.358 }, 00:20:18.358 "auth": { 00:20:18.358 "state": "completed", 00:20:18.358 "digest": "sha512", 00:20:18.358 "dhgroup": "ffdhe6144" 00:20:18.358 } 00:20:18.358 } 00:20:18.358 ]' 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.358 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.619 14:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:20:19.560 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.561 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.820 00:20:19.821 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.821 14:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.821 14:06:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.081 { 00:20:20.081 "cntlid": 133, 00:20:20.081 "qid": 0, 00:20:20.081 "state": "enabled", 00:20:20.081 "thread": "nvmf_tgt_poll_group_000", 00:20:20.081 "listen_address": { 00:20:20.081 "trtype": "TCP", 00:20:20.081 "adrfam": "IPv4", 00:20:20.081 "traddr": "10.0.0.2", 00:20:20.081 "trsvcid": "4420" 00:20:20.081 }, 00:20:20.081 "peer_address": { 00:20:20.081 "trtype": "TCP", 00:20:20.081 "adrfam": "IPv4", 00:20:20.081 "traddr": "10.0.0.1", 00:20:20.081 "trsvcid": "37108" 00:20:20.081 }, 00:20:20.081 "auth": { 00:20:20.081 "state": "completed", 00:20:20.081 "digest": "sha512", 00:20:20.081 "dhgroup": "ffdhe6144" 00:20:20.081 } 00:20:20.081 } 00:20:20.081 ]' 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.081 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.341 14:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.281 14:06:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.281 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.541 00:20:21.541 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.541 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.541 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.800 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.800 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.800 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.800 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.800 14:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.800 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.800 { 00:20:21.800 "cntlid": 135, 00:20:21.800 "qid": 0, 00:20:21.800 "state": "enabled", 00:20:21.800 "thread": "nvmf_tgt_poll_group_000", 00:20:21.800 "listen_address": { 00:20:21.800 "trtype": "TCP", 00:20:21.800 "adrfam": "IPv4", 00:20:21.800 "traddr": "10.0.0.2", 00:20:21.800 "trsvcid": "4420" 00:20:21.800 }, 
00:20:21.800 "peer_address": { 00:20:21.800 "trtype": "TCP", 00:20:21.800 "adrfam": "IPv4", 00:20:21.800 "traddr": "10.0.0.1", 00:20:21.801 "trsvcid": "37136" 00:20:21.801 }, 00:20:21.801 "auth": { 00:20:21.801 "state": "completed", 00:20:21.801 "digest": "sha512", 00:20:21.801 "dhgroup": "ffdhe6144" 00:20:21.801 } 00:20:21.801 } 00:20:21.801 ]' 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.801 14:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.060 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.001 14:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.570 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.570 { 00:20:23.570 "cntlid": 137, 00:20:23.570 "qid": 0, 00:20:23.570 "state": "enabled", 00:20:23.570 "thread": "nvmf_tgt_poll_group_000", 00:20:23.570 "listen_address": { 00:20:23.570 "trtype": "TCP", 00:20:23.570 "adrfam": "IPv4", 00:20:23.570 "traddr": "10.0.0.2", 00:20:23.570 "trsvcid": "4420" 00:20:23.570 }, 00:20:23.570 "peer_address": { 00:20:23.570 "trtype": "TCP", 00:20:23.570 "adrfam": "IPv4", 00:20:23.570 "traddr": "10.0.0.1", 00:20:23.570 "trsvcid": "37146" 00:20:23.570 }, 00:20:23.570 "auth": { 00:20:23.570 "state": "completed", 00:20:23.570 "digest": "sha512", 00:20:23.570 "dhgroup": "ffdhe8192" 00:20:23.570 } 00:20:23.570 } 00:20:23.570 ]' 00:20:23.570 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.831 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.831 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.831 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.831 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.831 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.831 14:06:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.831 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.091 14:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:20:24.660 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.660 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:24.660 14:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.660 14:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.660 14:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.660 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.661 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.661 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.920 14:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.489 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.489 { 00:20:25.489 "cntlid": 139, 00:20:25.489 "qid": 0, 00:20:25.489 "state": "enabled", 00:20:25.489 "thread": "nvmf_tgt_poll_group_000", 00:20:25.489 "listen_address": { 00:20:25.489 "trtype": "TCP", 00:20:25.489 "adrfam": "IPv4", 00:20:25.489 "traddr": "10.0.0.2", 00:20:25.489 "trsvcid": "4420" 00:20:25.489 }, 00:20:25.489 "peer_address": { 00:20:25.489 "trtype": "TCP", 00:20:25.489 "adrfam": "IPv4", 00:20:25.489 "traddr": "10.0.0.1", 00:20:25.489 "trsvcid": "37182" 00:20:25.489 }, 00:20:25.489 "auth": { 00:20:25.489 "state": "completed", 00:20:25.489 "digest": "sha512", 00:20:25.489 "dhgroup": "ffdhe8192" 00:20:25.489 } 00:20:25.489 } 00:20:25.489 ]' 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.489 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.750 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.750 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.750 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.750 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.750 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.750 14:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MTc2OTg4ZjYxYjhmNDhiZTA1MzU2NTkzZTM1NDdkZTThKUKO: --dhchap-ctrl-secret DHHC-1:02:N2Q0NmM5NWQxNGNkZWU0N2E0NjZiNTA0MjNiOGRhODYwNjNkZDdkY2E0NDdlMzdmYnAiIg==: 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.688 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.689 14:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.258 00:20:27.258 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.258 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.258 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.518 { 00:20:27.518 "cntlid": 141, 00:20:27.518 "qid": 0, 00:20:27.518 "state": "enabled", 00:20:27.518 "thread": "nvmf_tgt_poll_group_000", 00:20:27.518 "listen_address": { 00:20:27.518 "trtype": "TCP", 00:20:27.518 "adrfam": "IPv4", 00:20:27.518 "traddr": "10.0.0.2", 00:20:27.518 "trsvcid": "4420" 00:20:27.518 }, 00:20:27.518 "peer_address": { 00:20:27.518 "trtype": "TCP", 00:20:27.518 "adrfam": "IPv4", 00:20:27.518 "traddr": "10.0.0.1", 00:20:27.518 "trsvcid": "45478" 00:20:27.518 }, 00:20:27.518 "auth": { 00:20:27.518 "state": "completed", 00:20:27.518 "digest": "sha512", 00:20:27.518 "dhgroup": "ffdhe8192" 00:20:27.518 } 00:20:27.518 } 00:20:27.518 ]' 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.518 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.778 14:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjQ5Nzk4NTVjNTg3ODI0NTM1MDVkNjAyNzZkM2NkNGZiODM2ZmMxNzhjZTE5MmY1lv+RCQ==: --dhchap-ctrl-secret DHHC-1:01:ZTgxYTJiNWU5OTAxZDhiYzQ2OThlZGE1OWU5OTc2MGV98yU7: 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.348 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.609 14:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.182 00:20:29.182 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.182 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.182 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.182 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.182 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.182 14:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.182 14:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.442 { 00:20:29.442 "cntlid": 143, 00:20:29.442 "qid": 0, 00:20:29.442 "state": "enabled", 00:20:29.442 "thread": "nvmf_tgt_poll_group_000", 00:20:29.442 "listen_address": { 00:20:29.442 "trtype": "TCP", 00:20:29.442 "adrfam": "IPv4", 00:20:29.442 "traddr": "10.0.0.2", 00:20:29.442 "trsvcid": "4420" 00:20:29.442 }, 00:20:29.442 "peer_address": { 00:20:29.442 "trtype": "TCP", 00:20:29.442 "adrfam": "IPv4", 00:20:29.442 "traddr": "10.0.0.1", 00:20:29.442 "trsvcid": "45498" 00:20:29.442 }, 00:20:29.442 "auth": { 00:20:29.442 "state": "completed", 00:20:29.442 "digest": "sha512", 00:20:29.442 "dhgroup": "ffdhe8192" 00:20:29.442 } 00:20:29.442 } 00:20:29.442 ]' 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.442 
14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.442 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.443 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.703 14:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:30.274 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.534 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.535 14:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.107 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.107 { 00:20:31.107 "cntlid": 145, 00:20:31.107 "qid": 0, 00:20:31.107 "state": "enabled", 00:20:31.107 "thread": "nvmf_tgt_poll_group_000", 00:20:31.107 "listen_address": { 00:20:31.107 "trtype": "TCP", 00:20:31.107 "adrfam": "IPv4", 00:20:31.107 "traddr": "10.0.0.2", 00:20:31.107 "trsvcid": "4420" 00:20:31.107 }, 00:20:31.107 "peer_address": { 00:20:31.107 "trtype": "TCP", 00:20:31.107 "adrfam": "IPv4", 00:20:31.107 "traddr": "10.0.0.1", 00:20:31.107 "trsvcid": "45532" 00:20:31.107 }, 00:20:31.107 "auth": { 00:20:31.107 "state": "completed", 00:20:31.107 "digest": "sha512", 00:20:31.107 "dhgroup": "ffdhe8192" 00:20:31.107 } 00:20:31.107 } 00:20:31.107 ]' 00:20:31.107 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.367 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.367 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.367 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.367 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.367 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.367 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.367 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.628 14:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:Yjc1YmZmOGM2ODc4ZTQ1NmY2OTg2MTg0ZjhjNzJkMWFjY2UwYjhmMDE0ODhjZGVjo9k9yg==: --dhchap-ctrl-secret DHHC-1:03:MzcyNWYwNmI4MThlMzY3MTJjZDQwZmMwZjBlYzVjYzg4ODFmYmI2ODhiMjAxOTJmNjRmMDE1YWEzNTIwNzc5Y0bqpl0=: 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:32.200 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:32.201 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:32.772 request: 00:20:32.772 { 00:20:32.772 "name": "nvme0", 00:20:32.772 "trtype": "tcp", 00:20:32.772 "traddr": "10.0.0.2", 00:20:32.772 "adrfam": "ipv4", 00:20:32.772 "trsvcid": "4420", 00:20:32.772 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:32.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:32.772 "prchk_reftag": false, 00:20:32.772 "prchk_guard": false, 00:20:32.772 "hdgst": false, 00:20:32.772 "ddgst": false, 00:20:32.772 "dhchap_key": "key2", 00:20:32.772 "method": "bdev_nvme_attach_controller", 00:20:32.772 "req_id": 1 00:20:32.772 } 00:20:32.772 Got JSON-RPC error response 00:20:32.772 response: 00:20:32.772 { 00:20:32.772 "code": -5, 00:20:32.772 "message": "Input/output error" 00:20:32.772 } 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:32.772 14:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:33.344 request: 00:20:33.344 { 00:20:33.344 "name": "nvme0", 00:20:33.344 "trtype": "tcp", 00:20:33.344 "traddr": "10.0.0.2", 00:20:33.344 "adrfam": "ipv4", 00:20:33.344 "trsvcid": "4420", 00:20:33.344 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:33.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:33.344 "prchk_reftag": false, 00:20:33.344 "prchk_guard": false, 00:20:33.344 "hdgst": false, 00:20:33.344 "ddgst": false, 00:20:33.344 "dhchap_key": "key1", 00:20:33.344 "dhchap_ctrlr_key": "ckey2", 00:20:33.344 "method": "bdev_nvme_attach_controller", 00:20:33.344 "req_id": 1 00:20:33.344 } 00:20:33.344 Got JSON-RPC error response 00:20:33.344 response: 00:20:33.344 { 00:20:33.344 "code": -5, 00:20:33.344 "message": "Input/output error" 00:20:33.344 } 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.344 14:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.605 request: 00:20:33.605 { 00:20:33.605 "name": "nvme0", 00:20:33.605 "trtype": "tcp", 00:20:33.605 "traddr": "10.0.0.2", 00:20:33.605 "adrfam": "ipv4", 00:20:33.605 "trsvcid": "4420", 00:20:33.605 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:33.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:33.605 "prchk_reftag": false, 00:20:33.605 "prchk_guard": false, 00:20:33.605 "hdgst": false, 00:20:33.605 "ddgst": false, 00:20:33.605 "dhchap_key": "key1", 00:20:33.605 "dhchap_ctrlr_key": "ckey1", 00:20:33.605 "method": "bdev_nvme_attach_controller", 00:20:33.605 "req_id": 1 00:20:33.605 } 00:20:33.605 Got JSON-RPC error response 00:20:33.605 response: 00:20:33.605 { 00:20:33.605 "code": -5, 00:20:33.605 "message": "Input/output error" 00:20:33.605 } 00:20:33.605 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:33.605 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1367128 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1367128 ']' 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1367128 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1367128 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1367128' 00:20:33.866 killing process with pid 1367128 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1367128 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1367128 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1393077 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1393077 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1393077 ']' 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.866 14:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1393077 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1393077 ']' 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
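The restart sequence above swaps the original target (pid 1367128) for a fresh nvmf_tgt started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and nvmf_auth logging, then blocks in waitforlisten until pid 1393077 answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming the helper simply retries a cheap RPC until the socket is live (the real autotest_common.sh version adds retry bounds and richer error reporting, and the rpc.py path is abbreviated here):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Bail out early if the target died while starting up.
            kill -0 "$pid" 2> /dev/null || return 1
            # Any inexpensive RPC serves as a liveness probe.
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

Only once this probe succeeds does the script resume issuing rpc_cmd calls against the new process, which is why the log continues directly with target/auth.sh@143.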
00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.808 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.069 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.069 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:35.069 14:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:35.069 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.069 14:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.069 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.641 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.641 { 00:20:35.641 
"cntlid": 1, 00:20:35.641 "qid": 0, 00:20:35.641 "state": "enabled", 00:20:35.641 "thread": "nvmf_tgt_poll_group_000", 00:20:35.641 "listen_address": { 00:20:35.641 "trtype": "TCP", 00:20:35.641 "adrfam": "IPv4", 00:20:35.641 "traddr": "10.0.0.2", 00:20:35.641 "trsvcid": "4420" 00:20:35.641 }, 00:20:35.641 "peer_address": { 00:20:35.641 "trtype": "TCP", 00:20:35.641 "adrfam": "IPv4", 00:20:35.641 "traddr": "10.0.0.1", 00:20:35.641 "trsvcid": "45578" 00:20:35.641 }, 00:20:35.641 "auth": { 00:20:35.641 "state": "completed", 00:20:35.641 "digest": "sha512", 00:20:35.641 "dhgroup": "ffdhe8192" 00:20:35.641 } 00:20:35.641 } 00:20:35.641 ]' 00:20:35.641 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.902 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.902 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.902 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.902 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.902 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.902 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.902 14:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.162 14:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:NDAwZDQ5Zjc0YzJkNTQxM2YzMzI0MTIwZDJmZDAzMjYxMGExZjgzNmIyNmZlNDQ0NmNiYjk1NmU3MDBiZTQ4MUIJ6WI=: 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:36.734 14:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.995 14:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.995 request: 00:20:36.995 { 00:20:36.995 "name": "nvme0", 00:20:36.995 "trtype": "tcp", 00:20:36.995 "traddr": "10.0.0.2", 00:20:36.995 "adrfam": "ipv4", 00:20:36.995 "trsvcid": "4420", 00:20:36.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:36.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:36.995 "prchk_reftag": false, 00:20:36.995 "prchk_guard": false, 00:20:36.995 "hdgst": false, 00:20:36.995 "ddgst": false, 00:20:36.995 "dhchap_key": "key3", 00:20:36.995 "method": "bdev_nvme_attach_controller", 00:20:36.995 "req_id": 1 00:20:36.995 } 00:20:36.995 Got JSON-RPC error response 00:20:36.995 response: 00:20:36.995 { 00:20:36.995 "code": -5, 00:20:36.995 "message": "Input/output error" 00:20:36.995 } 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:36.995 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:37.256 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.256 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:37.256 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.256 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:37.256 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.257 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:37.257 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.257 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.257 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.518 request: 00:20:37.518 { 00:20:37.518 "name": "nvme0", 00:20:37.518 "trtype": "tcp", 00:20:37.518 "traddr": "10.0.0.2", 00:20:37.518 "adrfam": "ipv4", 00:20:37.518 "trsvcid": "4420", 00:20:37.518 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:37.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:37.518 "prchk_reftag": false, 00:20:37.518 "prchk_guard": false, 00:20:37.518 "hdgst": false, 00:20:37.518 "ddgst": false, 00:20:37.518 "dhchap_key": "key3", 00:20:37.518 "method": "bdev_nvme_attach_controller", 00:20:37.518 "req_id": 1 00:20:37.518 } 00:20:37.518 Got JSON-RPC error response 00:20:37.518 response: 00:20:37.518 { 00:20:37.518 "code": -5, 00:20:37.518 "message": "Input/output error" 00:20:37.518 } 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:37.518 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:37.784 request: 00:20:37.784 { 00:20:37.784 "name": "nvme0", 00:20:37.784 "trtype": "tcp", 00:20:37.784 "traddr": "10.0.0.2", 00:20:37.784 "adrfam": "ipv4", 00:20:37.784 "trsvcid": "4420", 00:20:37.784 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:37.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:37.784 "prchk_reftag": false, 00:20:37.784 "prchk_guard": false, 00:20:37.784 "hdgst": false, 00:20:37.784 "ddgst": false, 00:20:37.784 
"dhchap_key": "key0", 00:20:37.784 "dhchap_ctrlr_key": "key1", 00:20:37.784 "method": "bdev_nvme_attach_controller", 00:20:37.784 "req_id": 1 00:20:37.784 } 00:20:37.784 Got JSON-RPC error response 00:20:37.784 response: 00:20:37.784 { 00:20:37.784 "code": -5, 00:20:37.784 "message": "Input/output error" 00:20:37.784 } 00:20:37.784 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:37.784 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.784 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.784 14:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.784 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:37.784 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.095 00:20:38.095 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:38.095 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:38.095 14:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.095 14:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.095 14:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.095 14:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1367300 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1367300 ']' 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1367300 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1367300 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1367300' 00:20:38.369 killing process with pid 1367300 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1367300 00:20:38.369 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1367300 
00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.631 rmmod nvme_tcp 00:20:38.631 rmmod nvme_fabrics 00:20:38.631 rmmod nvme_keyring 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1393077 ']' 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1393077 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1393077 ']' 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1393077 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1393077 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1393077' 00:20:38.631 killing process with pid 1393077 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1393077 00:20:38.631 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1393077 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.892 14:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.805 14:06:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:40.805 14:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.P9B /tmp/spdk.key-sha256.mRB /tmp/spdk.key-sha384.5bC /tmp/spdk.key-sha512.lvi /tmp/spdk.key-sha512.kh5 /tmp/spdk.key-sha384.0KP /tmp/spdk.key-sha256.XLu '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:40.805 00:20:40.805 real 2m21.126s 00:20:40.805 user 5m12.321s 00:20:40.805 sys 0m20.019s 00:20:40.805 14:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:40.805 14:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.805 ************************************ 00:20:40.805 END TEST nvmf_auth_target 00:20:40.805 ************************************ 00:20:41.066 14:06:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:41.066 14:06:38 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:41.066 14:06:38 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:41.066 14:06:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:41.066 14:06:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.066 14:06:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:41.066 ************************************ 00:20:41.066 START TEST nvmf_bdevio_no_huge 00:20:41.066 ************************************ 00:20:41.066 14:06:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:41.066 * Looking for test storage... 00:20:41.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.066 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
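(Editor's note, not part of the captured log: a minimal sketch of how the target launch line seen later at nvmf/common.sh@480 is assembled from the build_nvmf_app_args trace above and the lines that follow. The literal contents of NO_HUGE are an inference from that launch line; the trace itself never prints them.)

NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")          # hypothetical base; the log shows .../spdk/build/bin/nvmf_tgt
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # common.sh@29, traced above (SHM ID 0 in this run)
NVMF_APP+=("${NO_HUGE[@]}")                  # common.sh@31, traced below; assumed NO_HUGE=(--no-huge -s 1024)
# After the test-net setup, common.sh@270 prefixes the network-namespace wrapper:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # i.e. ip netns exec cvl_0_0_ns_spdk ...
# nvmfappstart -m 0x78 then produces the command visible at common.sh@480:
#   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78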
00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.067 14:06:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:49.221 14:06:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.221 14:06:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.221 14:06:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.221 14:06:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.221 14:06:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.221 14:06:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:49.221 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.221 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:49.222 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:49.222 Found net devices under 0000:31:00.0: cvl_0_0 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:49.222 Found net devices under 0000:31:00.1: cvl_0_1 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:20:49.222 00:20:49.222 --- 10.0.0.2 ping statistics --- 00:20:49.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.222 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:20:49.222 00:20:49.222 --- 10.0.0.1 ping statistics --- 00:20:49.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.222 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.222 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1398813 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1398813 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1398813 ']' 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.483 14:06:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:49.483 [2024-07-15 14:06:47.419365] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:49.483 [2024-07-15 14:06:47.419420] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:49.483 [2024-07-15 14:06:47.518804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.744 [2024-07-15 14:06:47.623248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:49.744 [2024-07-15 14:06:47.623302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.744 [2024-07-15 14:06:47.623310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.744 [2024-07-15 14:06:47.623317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.744 [2024-07-15 14:06:47.623323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.744 [2024-07-15 14:06:47.623497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:49.744 [2024-07-15 14:06:47.623630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:49.744 [2024-07-15 14:06:47.623806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.744 [2024-07-15 14:06:47.623807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 [2024-07-15 14:06:48.256569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 Malloc0 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.315 14:06:48 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 [2024-07-15 14:06:48.310348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:50.315 { 00:20:50.315 "params": { 00:20:50.315 "name": "Nvme$subsystem", 00:20:50.315 "trtype": "$TEST_TRANSPORT", 00:20:50.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.315 "adrfam": "ipv4", 00:20:50.315 "trsvcid": "$NVMF_PORT", 00:20:50.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.315 "hdgst": ${hdgst:-false}, 00:20:50.315 "ddgst": ${ddgst:-false} 00:20:50.315 }, 00:20:50.315 "method": "bdev_nvme_attach_controller" 00:20:50.315 } 00:20:50.315 EOF 00:20:50.315 )") 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:50.315 14:06:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:50.315 "params": { 00:20:50.315 "name": "Nvme1", 00:20:50.315 "trtype": "tcp", 00:20:50.315 "traddr": "10.0.0.2", 00:20:50.315 "adrfam": "ipv4", 00:20:50.315 "trsvcid": "4420", 00:20:50.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.315 "hdgst": false, 00:20:50.315 "ddgst": false 00:20:50.315 }, 00:20:50.315 "method": "bdev_nvme_attach_controller" 00:20:50.315 }' 00:20:50.315 [2024-07-15 14:06:48.364323] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:50.315 [2024-07-15 14:06:48.364396] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1398993 ] 00:20:50.575 [2024-07-15 14:06:48.443116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:50.575 [2024-07-15 14:06:48.540057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.575 [2024-07-15 14:06:48.540177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.575 [2024-07-15 14:06:48.540180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.835 I/O targets: 00:20:50.835 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:50.835 00:20:50.835 00:20:50.835 CUnit - A unit testing framework for C - Version 2.1-3 00:20:50.835 http://cunit.sourceforge.net/ 00:20:50.835 00:20:50.835 00:20:50.835 Suite: bdevio tests on: Nvme1n1 00:20:50.835 Test: blockdev write read block ...passed 00:20:50.835 Test: blockdev write zeroes read block ...passed 00:20:50.835 Test: blockdev write zeroes read no split ...passed 00:20:50.835 Test: blockdev write zeroes read split ...passed 00:20:50.835 Test: blockdev write zeroes read split partial ...passed 00:20:50.835 Test: blockdev reset ...[2024-07-15 14:06:48.852523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:50.835 [2024-07-15 14:06:48.852588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad970 (9): Bad file descriptor 00:20:51.095 [2024-07-15 14:06:48.949517] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:51.095 passed 00:20:51.095 Test: blockdev write read 8 blocks ...passed 00:20:51.095 Test: blockdev write read size > 128k ...passed 00:20:51.095 Test: blockdev write read invalid size ...passed 00:20:51.095 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:51.095 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:51.095 Test: blockdev write read max offset ...passed 00:20:51.095 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:51.095 Test: blockdev writev readv 8 blocks ...passed 00:20:51.095 Test: blockdev writev readv 30 x 1block ...passed 00:20:51.354 Test: blockdev writev readv block ...passed 00:20:51.354 Test: blockdev writev readv size > 128k ...passed 00:20:51.354 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:51.354 Test: blockdev comparev and writev ...[2024-07-15 14:06:49.256300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.256325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.256335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.256341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.256826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.256834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.256844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.256849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.257346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.257354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.257363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.257369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.257843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.257850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.257860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:51.354 [2024-07-15 14:06:49.257865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:51.354 passed 00:20:51.354 Test: blockdev nvme passthru rw ...passed 00:20:51.354 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:06:49.342720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:51.354 [2024-07-15 14:06:49.342729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.343127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:51.354 [2024-07-15 14:06:49.343134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.343485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:51.354 [2024-07-15 14:06:49.343492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:51.354 [2024-07-15 14:06:49.343870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:51.354 [2024-07-15 14:06:49.343876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:51.354 passed 00:20:51.354 Test: blockdev nvme admin passthru ...passed 00:20:51.354 Test: blockdev copy ...passed 00:20:51.354 00:20:51.354 Run Summary: Type Total Ran Passed Failed Inactive 00:20:51.354 suites 1 1 n/a 0 0 00:20:51.354 tests 23 23 23 0 0 00:20:51.354 asserts 152 152 152 0 n/a 00:20:51.354 00:20:51.354 Elapsed time = 1.383 seconds 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.615 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.615 rmmod nvme_tcp 00:20:51.615 rmmod nvme_fabrics 00:20:51.615 rmmod nvme_keyring 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1398813 ']' 00:20:51.875 14:06:49 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1398813 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1398813 ']' 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1398813 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398813 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:51.875 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:51.876 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398813' 00:20:51.876 killing process with pid 1398813 00:20:51.876 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1398813 00:20:51.876 14:06:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1398813 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.136 14:06:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.053 14:06:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.053 00:20:54.053 real 0m13.126s 00:20:54.053 user 0m14.101s 00:20:54.053 sys 0m7.019s 00:20:54.053 14:06:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.053 14:06:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:54.053 ************************************ 00:20:54.053 END TEST nvmf_bdevio_no_huge 00:20:54.053 ************************************ 00:20:54.053 14:06:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:54.053 14:06:52 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:54.053 14:06:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:54.053 14:06:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.053 14:06:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.314 ************************************ 00:20:54.314 START TEST nvmf_tls 00:20:54.314 ************************************ 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:54.314 * Looking for test storage... 
00:20:54.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.314 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.315 14:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.451 
14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:02.451 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:02.451 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:02.451 Found net devices under 0000:31:00.0: cvl_0_0 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:02.451 Found net devices under 0000:31:00.1: cvl_0_1 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.451 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:21:02.713 00:21:02.713 --- 10.0.0.2 ping statistics --- 00:21:02.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.713 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:21:02.713 00:21:02.713 --- 10.0.0.1 ping statistics --- 00:21:02.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.713 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1403952 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1403952 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1403952 ']' 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.713 14:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.713 [2024-07-15 14:07:00.683822] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:02.713 [2024-07-15 14:07:00.683883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.713 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.713 [2024-07-15 14:07:00.782265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.974 [2024-07-15 14:07:00.875198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.974 [2024-07-15 14:07:00.875264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
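The namespace plumbing and ping exchange above are what let two ports of the same physical NIC talk over the wire: the target port is moved into its own network namespace, so the kernel cannot short-circuit 10.0.0.1 -> 10.0.0.2 through local routing. A minimal standalone sketch of that topology, using the interface names, addresses, and iptables rule from this run (root required):

TARGET_IF=cvl_0_0             # target side, moved into a private namespace
INITIATOR_IF=cvl_0_1          # initiator side, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target port disappears from the host view
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator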
00:21:02.974 [2024-07-15 14:07:00.875273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.974 [2024-07-15 14:07:00.875279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.974 [2024-07-15 14:07:00.875285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.974 [2024-07-15 14:07:00.875313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:03.545 14:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:03.806 true 00:21:03.806 14:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:03.806 14:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:03.806 14:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:03.806 14:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:03.806 14:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:04.067 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:04.067 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:04.328 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:04.328 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:04.328 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:04.328 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:04.328 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:04.589 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:04.589 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:04.589 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:04.589 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:04.849 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:04.849 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:04.849 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:04.849 14:07:02 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:04.849 14:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:05.109 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:05.109 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:05.109 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.RBPbb565AO 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.OJaqmqz6YB 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RBPbb565AO 00:21:05.369 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OJaqmqz6YB 00:21:05.629 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:05.629 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:05.889 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RBPbb565AO 00:21:05.889 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RBPbb565AO 00:21:05.889 14:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:05.889 [2024-07-15 14:07:03.998985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.149 14:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:06.149 14:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:06.409 [2024-07-15 14:07:04.303712] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.409 [2024-07-15 14:07:04.303895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.409 14:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:06.409 malloc0 00:21:06.409 14:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:06.669 14:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RBPbb565AO 00:21:06.669 [2024-07-15 14:07:04.770776] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:06.928 14:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RBPbb565AO 00:21:06.928 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.988 Initializing NVMe Controllers 00:21:16.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:16.988 Initialization complete. Launching workers. 
00:21:16.988 ========================================================
00:21:16.988 Latency(us)
00:21:16.988 Device Information : IOPS MiB/s Average min max
00:21:16.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19092.58 74.58 3352.11 1150.40 4038.40
00:21:16.988 ========================================================
00:21:16.988 Total : 19092.58 74.58 3352.11 1150.40 4038.40
00:21:16.988
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RBPbb565AO
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RBPbb565AO'
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1406834
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1406834 /var/tmp/bdevperf.sock
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1406834 ']'
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:16.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:16.988 14:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:16.988 [2024-07-15 14:07:14.920179] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
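A note on the key material driving these runs: /tmp/tmp.RBPbb565AO and /tmp/tmp.OJaqmqz6YB (created by format_interchange_psk further up) hold PSKs in the NVMe TLS interchange format, NVMeTLSkey-1:<hh>:<base64 of the configured PSK followed by its CRC-32>:. A rough standalone equivalent of that helper — the little-endian CRC byte order is an assumption, checked only against the keys printed in this log:

format_psk() {   # format_psk <configured-psk> <hash-indicator>
    local key=$1 digest=$2
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); crc=struct.pack("<I",zlib.crc32(k)); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"
}

format_psk 00112233445566778899aabbccddeeff 1
# should print: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Like the harness, write the result with echo -n into a file and chmod 0600 it before handing it to --psk.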
00:21:16.988 [2024-07-15 14:07:14.920232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406834 ]
00:21:16.988 EAL: No free 2048 kB hugepages reported on node 1
00:21:16.988 [2024-07-15 14:07:14.975105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:16.988 [2024-07-15 14:07:15.027928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:17.929 14:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:17.929 14:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:21:17.929 14:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RBPbb565AO
00:21:17.929 [2024-07-15 14:07:15.840021] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:17.929 [2024-07-15 14:07:15.840073] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:21:17.929 TLSTESTn1
00:21:17.929 14:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:21:17.929 Running I/O for 10 seconds...
00:21:30.155
00:21:30.155 Latency(us)
00:21:30.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:30.155 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:30.155 Verification LBA range: start 0x0 length 0x2000
00:21:30.155 TLSTESTn1 : 10.01 5674.65 22.17 0.00 0.00 22525.02 4505.60 48933.55
00:21:30.155 ===================================================================================================================
00:21:30.155 Total : 5674.65 22.17 0.00 0.00 22525.02 4505.60 48933.55
00:21:30.155 0
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1406834
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1406834 ']'
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1406834
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1406834
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1406834'
00:21:30.155 killing process with pid 1406834
00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1406834
00:21:30.155 Received shutdown signal, test time was about 10.000000 seconds
00:21:30.155
00:21:30.155 Latency(us)
00:21:30.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s
Average min max 00:21:30.155 =================================================================================================================== 00:21:30.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.155 [2024-07-15 14:07:26.134819] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1406834 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OJaqmqz6YB 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OJaqmqz6YB 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OJaqmqz6YB 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OJaqmqz6YB' 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1408938 00:21:30.155 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1408938 /var/tmp/bdevperf.sock 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1408938 ']' 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.156 14:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.156 [2024-07-15 14:07:26.308597] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
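From target/tls.sh@146 on, the suite switches to error-path checks: run_bdevperf is wrapped in NOT, which simply inverts the exit status so that an attach that must fail makes the test pass. Stripped of the signal handling visible in the common/autotest_common.sh xtrace above (the es > 128 branch), the wrapper amounts to:

NOT() {
    local es=0
    "$@" || es=$?
    # the real helper also special-cases signal exits (es > 128); omitted here
    (( es != 0 ))
}

# passes only if the attach with the second, never-registered key fails
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OJaqmqz6YB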
00:21:30.156 [2024-07-15 14:07:26.308662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408938 ] 00:21:30.156 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.156 [2024-07-15 14:07:26.364668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.156 [2024-07-15 14:07:26.416469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OJaqmqz6YB 00:21:30.156 [2024-07-15 14:07:27.200409] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.156 [2024-07-15 14:07:27.200465] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:30.156 [2024-07-15 14:07:27.206772] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:30.156 [2024-07-15 14:07:27.207480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf0d80 (107): Transport endpoint is not connected 00:21:30.156 [2024-07-15 14:07:27.208477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf0d80 (9): Bad file descriptor 00:21:30.156 [2024-07-15 14:07:27.209479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:30.156 [2024-07-15 14:07:27.209485] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:30.156 [2024-07-15 14:07:27.209492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
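This failure is the expected outcome: the subsystem was registered with /tmp/tmp.RBPbb565AO, so a handshake offering /tmp/tmp.OJaqmqz6YB never completes — the socket drops mid-setup ("Transport endpoint is not connected"), the controller lands in a failed state, and the attach RPC reports it as the -5 Input/output error in the JSON-RPC exchange just below. As a standalone check:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used by this job
if "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.OJaqmqz6YB; then
    echo "unexpected: attach with a mismatched PSK succeeded" >&2
    exit 1
fi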
00:21:30.156 request: 00:21:30.156 { 00:21:30.156 "name": "TLSTEST", 00:21:30.156 "trtype": "tcp", 00:21:30.156 "traddr": "10.0.0.2", 00:21:30.156 "adrfam": "ipv4", 00:21:30.156 "trsvcid": "4420", 00:21:30.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.156 "prchk_reftag": false, 00:21:30.156 "prchk_guard": false, 00:21:30.156 "hdgst": false, 00:21:30.156 "ddgst": false, 00:21:30.156 "psk": "/tmp/tmp.OJaqmqz6YB", 00:21:30.156 "method": "bdev_nvme_attach_controller", 00:21:30.156 "req_id": 1 00:21:30.156 } 00:21:30.156 Got JSON-RPC error response 00:21:30.156 response: 00:21:30.156 { 00:21:30.156 "code": -5, 00:21:30.156 "message": "Input/output error" 00:21:30.156 } 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1408938 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1408938 ']' 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1408938 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1408938 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1408938' 00:21:30.156 killing process with pid 1408938 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1408938 00:21:30.156 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.156 00:21:30.156 Latency(us) 00:21:30.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.156 =================================================================================================================== 00:21:30.156 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:30.156 [2024-07-15 14:07:27.278661] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1408938 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RBPbb565AO 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RBPbb565AO 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RBPbb565AO 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RBPbb565AO' 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1409274 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1409274 /var/tmp/bdevperf.sock 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1409274 ']' 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.156 14:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.156 [2024-07-15 14:07:27.438312] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
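The request/response dump printed after the previous failing attach (and repeated after each failing attach below) is just the JSON-RPC traffic on the bdevperf socket; rpc.py is a thin client for it. A hand-rolled equivalent over the raw Unix socket — illustrative only, with the params copied from the dump above and the standard JSON-RPC 2.0 envelope assumed:

python3 <<'EOF'
import json, socket
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/var/tmp/bdevperf.sock")
req = {"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
       "params": {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "prchk_reftag": False, "prchk_guard": False,
                  "hdgst": False, "ddgst": False,
                  "psk": "/tmp/tmp.OJaqmqz6YB"}}
s.sendall(json.dumps(req).encode())
print(s.recv(65536).decode())   # on failure, the code -5 error envelope shown above
EOF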
00:21:30.156 [2024-07-15 14:07:27.438366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409274 ] 00:21:30.156 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.156 [2024-07-15 14:07:27.494193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.156 [2024-07-15 14:07:27.545288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.156 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.156 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:30.156 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RBPbb565AO 00:21:30.417 [2024-07-15 14:07:28.349382] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.417 [2024-07-15 14:07:28.349443] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:30.417 [2024-07-15 14:07:28.359489] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:30.417 [2024-07-15 14:07:28.359508] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:30.417 [2024-07-15 14:07:28.359528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:30.417 [2024-07-15 14:07:28.360372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d74d80 (107): Transport endpoint is not connected 00:21:30.417 [2024-07-15 14:07:28.361367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d74d80 (9): Bad file descriptor 00:21:30.417 [2024-07-15 14:07:28.362369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:30.417 [2024-07-15 14:07:28.362376] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:30.417 [2024-07-15 14:07:28.362383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
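Here the key is the right one and the hostnqn is what is wrong, and the error text exposes the actual lookup key: the target resolves PSKs by the TLS PSK identity string "NVMe0R01 <hostnqn> <subnqn>", which only exists for pairs registered via nvmf_subsystem_add_host. Reproducing the identity being searched for:

tls_psk_identity() {   # args: hostnqn subnqn
    printf 'NVMe0R01 %s %s\n' "$1" "$2"
}

tls_psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1   (unknown to this target)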
00:21:30.417 request: 00:21:30.417 { 00:21:30.417 "name": "TLSTEST", 00:21:30.417 "trtype": "tcp", 00:21:30.417 "traddr": "10.0.0.2", 00:21:30.417 "adrfam": "ipv4", 00:21:30.417 "trsvcid": "4420", 00:21:30.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.417 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:30.417 "prchk_reftag": false, 00:21:30.417 "prchk_guard": false, 00:21:30.417 "hdgst": false, 00:21:30.417 "ddgst": false, 00:21:30.417 "psk": "/tmp/tmp.RBPbb565AO", 00:21:30.417 "method": "bdev_nvme_attach_controller", 00:21:30.417 "req_id": 1 00:21:30.417 } 00:21:30.417 Got JSON-RPC error response 00:21:30.417 response: 00:21:30.417 { 00:21:30.417 "code": -5, 00:21:30.417 "message": "Input/output error" 00:21:30.417 } 00:21:30.417 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1409274 00:21:30.417 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1409274 ']' 00:21:30.417 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1409274 00:21:30.417 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:30.418 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.418 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409274 00:21:30.418 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:30.418 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:30.418 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409274' 00:21:30.418 killing process with pid 1409274 00:21:30.418 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1409274 00:21:30.418 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.418 00:21:30.418 Latency(us) 00:21:30.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.418 =================================================================================================================== 00:21:30.418 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:30.418 [2024-07-15 14:07:28.450263] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:30.418 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1409274 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RBPbb565AO 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RBPbb565AO 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RBPbb565AO 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RBPbb565AO' 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1409462 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1409462 /var/tmp/bdevperf.sock 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1409462 ']' 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.678 14:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.678 [2024-07-15 14:07:28.608242] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:30.678 [2024-07-15 14:07:28.608297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409462 ] 00:21:30.678 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.678 [2024-07-15 14:07:28.664321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.678 [2024-07-15 14:07:28.716570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RBPbb565AO 00:21:31.626 [2024-07-15 14:07:29.512746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.626 [2024-07-15 14:07:29.512811] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.626 [2024-07-15 14:07:29.522315] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:31.626 [2024-07-15 14:07:29.522333] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:31.626 [2024-07-15 14:07:29.522352] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:31.626 [2024-07-15 14:07:29.522960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d80 (107): Transport endpoint is not connected 00:21:31.626 [2024-07-15 14:07:29.523956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d80 (9): Bad file descriptor 00:21:31.626 [2024-07-15 14:07:29.524958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:31.626 [2024-07-15 14:07:29.524965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:31.626 [2024-07-15 14:07:29.524971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
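The mirror image: a registered host asking for a subsystem it was never added to (cnode2) produces an equally unknown identity, rejected by posix_sock_psk_find_session_server_cb during the handshake itself, before any NVMe-level exchange. Conceptually the target keeps a map from identity to key; a toy model of that lookup (illustrative only, not SPDK internals):

# entries appear only for nvmf_subsystem_add_host registrations
declare -A psk_table=(
    ["NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"]=/tmp/tmp.RBPbb565AO
)

identity="NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2"
if [[ -z "${psk_table[$identity]:-}" ]]; then
    echo "Unable to find PSK for identity: $identity"   # same failure as above
fi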
00:21:31.626 request: 00:21:31.626 { 00:21:31.626 "name": "TLSTEST", 00:21:31.626 "trtype": "tcp", 00:21:31.626 "traddr": "10.0.0.2", 00:21:31.626 "adrfam": "ipv4", 00:21:31.626 "trsvcid": "4420", 00:21:31.626 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.626 "prchk_reftag": false, 00:21:31.626 "prchk_guard": false, 00:21:31.626 "hdgst": false, 00:21:31.626 "ddgst": false, 00:21:31.626 "psk": "/tmp/tmp.RBPbb565AO", 00:21:31.626 "method": "bdev_nvme_attach_controller", 00:21:31.626 "req_id": 1 00:21:31.626 } 00:21:31.626 Got JSON-RPC error response 00:21:31.626 response: 00:21:31.626 { 00:21:31.626 "code": -5, 00:21:31.626 "message": "Input/output error" 00:21:31.626 } 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1409462 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1409462 ']' 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1409462 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409462 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409462' 00:21:31.626 killing process with pid 1409462 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1409462 00:21:31.626 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.626 00:21:31.626 Latency(us) 00:21:31.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.626 =================================================================================================================== 00:21:31.626 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:31.626 [2024-07-15 14:07:29.610442] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1409462 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.626 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1409632 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1409632 /var/tmp/bdevperf.sock 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1409632 ']' 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.627 14:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.888 [2024-07-15 14:07:29.765635] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:31.888 [2024-07-15 14:07:29.765688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409632 ] 00:21:31.888 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.888 [2024-07-15 14:07:29.821343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.888 [2024-07-15 14:07:29.872757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.460 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.460 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:32.460 14:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:32.721 [2024-07-15 14:07:30.685151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:32.721 [2024-07-15 14:07:30.686395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bb460 (9): Bad file descriptor 00:21:32.721 [2024-07-15 14:07:30.687396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:32.721 [2024-07-15 14:07:30.687410] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:32.721 [2024-07-15 14:07:30.687417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
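The last negative case drops --psk entirely. Because the listener was added with -k, 10.0.0.2:4420 only speaks TLS, so a plain NVMe/TCP attach dies on a closed socket — note there is no PSK-identity error this time, just the disconnect. The two halves of that contract, condensed from the commands in this log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# target side (from target/tls.sh@53): -k marks the listener as TLS-secured
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k

# initiator side: the same attach as before, minus --psk, must fail
! "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1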
00:21:32.721 request: 00:21:32.721 { 00:21:32.721 "name": "TLSTEST", 00:21:32.721 "trtype": "tcp", 00:21:32.721 "traddr": "10.0.0.2", 00:21:32.721 "adrfam": "ipv4", 00:21:32.721 "trsvcid": "4420", 00:21:32.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.721 "prchk_reftag": false, 00:21:32.721 "prchk_guard": false, 00:21:32.721 "hdgst": false, 00:21:32.721 "ddgst": false, 00:21:32.721 "method": "bdev_nvme_attach_controller", 00:21:32.721 "req_id": 1 00:21:32.721 } 00:21:32.721 Got JSON-RPC error response 00:21:32.721 response: 00:21:32.721 { 00:21:32.721 "code": -5, 00:21:32.721 "message": "Input/output error" 00:21:32.721 } 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1409632 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1409632 ']' 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1409632 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409632 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409632' 00:21:32.721 killing process with pid 1409632 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1409632 00:21:32.721 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.721 00:21:32.721 Latency(us) 00:21:32.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.721 =================================================================================================================== 00:21:32.721 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:32.721 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1409632 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1403952 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1403952 ']' 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1403952 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1403952 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1403952' 00:21:32.991 
killing process with pid 1403952 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1403952 00:21:32.991 [2024-07-15 14:07:30.934445] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:32.991 14:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1403952 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.tYucyz5m3v 00:21:32.991 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.tYucyz5m3v 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1409991 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1409991 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1409991 ']' 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.256 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.256 [2024-07-15 14:07:31.166890] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
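With the error paths covered, the target is restarted and the happy path repeats with a 48-character configured PSK, and the interchange prefix changes from 01: to 02:. That second field is the hash indicator — 01 for SHA-256, 02 for SHA-384 in the NVMe/TCP TLS scheme — naming the digest used to derive the retained PSK from the configured one. Under the same CRC byte-order assumption as before, the new key reproduces as:

python3 -c 'import base64,struct,zlib; k=b"00112233445566778899aabbccddeeff0011223344556677"; crc=struct.pack("<I",zlib.crc32(k)); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k+crc).decode())'
# should print: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: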
00:21:33.256 [2024-07-15 14:07:31.166948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.256 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.256 [2024-07-15 14:07:31.256512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.256 [2024-07-15 14:07:31.312071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.256 [2024-07-15 14:07:31.312103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.256 [2024-07-15 14:07:31.312109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.256 [2024-07-15 14:07:31.312114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.256 [2024-07-15 14:07:31.312118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.256 [2024-07-15 14:07:31.312132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.825 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.825 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:33.825 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.825 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:33.825 14:07:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.085 14:07:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.085 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.tYucyz5m3v 00:21:34.085 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tYucyz5m3v 00:21:34.085 14:07:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:34.085 [2024-07-15 14:07:32.105333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.085 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:34.345 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:34.345 [2024-07-15 14:07:32.402050] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.345 [2024-07-15 14:07:32.402210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.345 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:34.605 malloc0 00:21:34.605 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.tYucyz5m3v 00:21:34.864 [2024-07-15 14:07:32.849120] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tYucyz5m3v 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tYucyz5m3v' 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1410346 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1410346 /var/tmp/bdevperf.sock 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1410346 ']' 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.864 14:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.864 [2024-07-15 14:07:32.894957] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
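setup_nvmf_tgt (tls.sh steps 49-58, traced above) is six RPCs end to end: TCP transport, subsystem, TLS-only listener (-k), malloc bdev, namespace, and finally the host entry that binds the PSK file to nqn.2016-06.io.spdk:host1. Collapsed out of the xtrace, with rpc.py standing in for the full scripts/rpc.py path and the ip netns exec wrapper elided:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k              # -k: listener requires TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB backing bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.tYucyz5m3v                  # PSK-path form, deprecated for v24.09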
00:21:34.864 [2024-07-15 14:07:32.895005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410346 ] 00:21:34.864 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.864 [2024-07-15 14:07:32.953548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.124 [2024-07-15 14:07:33.005671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.124 14:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.124 14:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:35.124 14:07:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tYucyz5m3v 00:21:35.124 [2024-07-15 14:07:33.220371] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.124 [2024-07-15 14:07:33.220427] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:35.383 TLSTESTn1 00:21:35.383 14:07:33 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.383 Running I/O for 10 seconds... 00:21:45.370 00:21:45.370 Latency(us) 00:21:45.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.370 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:45.370 Verification LBA range: start 0x0 length 0x2000 00:21:45.370 TLSTESTn1 : 10.04 5558.97 21.71 0.00 0.00 22973.31 4478.29 31894.19 00:21:45.370 =================================================================================================================== 00:21:45.370 Total : 5558.97 21.71 0.00 0.00 22973.31 4478.29 31894.19 00:21:45.370 0 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1410346 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1410346 ']' 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1410346 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1410346 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1410346' 00:21:45.630 killing process with pid 1410346 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1410346 00:21:45.630 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.630 00:21:45.630 Latency(us) 00:21:45.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:45.630 =================================================================================================================== 00:21:45.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.630 [2024-07-15 14:07:43.543083] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1410346 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.tYucyz5m3v 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tYucyz5m3v 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tYucyz5m3v 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tYucyz5m3v 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tYucyz5m3v' 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1412363 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1412363 /var/tmp/bdevperf.sock 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1412363 ']' 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.630 14:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.630 [2024-07-15 14:07:43.711252] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
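The run whose results appear above (TLSTESTn1, ~5559 IOPS over the 10-second verify workload) is what run_bdevperf drives on the initiator side: bdevperf starts idle (-z) on a private RPC socket, a controller is attached through the TLS listener with the same key file, and bdevperf.py then kicks off the queued job. In outline, with the long build/examples paths shortened:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.tYucyz5m3v        # initiator-side copy of the PSK file
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

On this side the --psk option maps to spdk_nvme_ctrlr_opts.psk, which the log flags as deprecated for v24.09 just like the target's PSK path.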
00:21:45.630 [2024-07-15 14:07:43.711306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412363 ] 00:21:45.630 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.891 [2024-07-15 14:07:43.767556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.891 [2024-07-15 14:07:43.819789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.461 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.461 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:46.461 14:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tYucyz5m3v 00:21:46.722 [2024-07-15 14:07:44.628070] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.722 [2024-07-15 14:07:44.628111] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:46.722 [2024-07-15 14:07:44.628117] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.tYucyz5m3v 00:21:46.722 request: 00:21:46.722 { 00:21:46.722 "name": "TLSTEST", 00:21:46.722 "trtype": "tcp", 00:21:46.722 "traddr": "10.0.0.2", 00:21:46.722 "adrfam": "ipv4", 00:21:46.722 "trsvcid": "4420", 00:21:46.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.722 "prchk_reftag": false, 00:21:46.722 "prchk_guard": false, 00:21:46.722 "hdgst": false, 00:21:46.722 "ddgst": false, 00:21:46.722 "psk": "/tmp/tmp.tYucyz5m3v", 00:21:46.722 "method": "bdev_nvme_attach_controller", 00:21:46.722 "req_id": 1 00:21:46.722 } 00:21:46.722 Got JSON-RPC error response 00:21:46.722 response: 00:21:46.722 { 00:21:46.722 "code": -1, 00:21:46.722 "message": "Operation not permitted" 00:21:46.722 } 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1412363 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1412363 ']' 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1412363 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1412363 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1412363' 00:21:46.722 killing process with pid 1412363 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1412363 00:21:46.722 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.722 00:21:46.722 Latency(us) 00:21:46.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.722 
=================================================================================================================== 00:21:46.722 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1412363 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1409991 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1409991 ']' 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1409991 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.722 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409991 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409991' 00:21:46.982 killing process with pid 1409991 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1409991 00:21:46.982 [2024-07-15 14:07:44.873981] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1409991 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.982 14:07:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1412702 00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1412702 00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1412702 ']' 00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
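That failed attach was the point of tls.sh step 171: after chmod 0666 on the key file, bdev_nvme_load_psk rejects it ("Incorrect permissions for PSK file") and bdev_nvme_attach_controller returns code -1, "Operation not permitted". The test wraps the call in the NOT helper so the failure is the passing outcome; reduced to a standalone sketch:

    chmod 0666 /tmp/tmp.tYucyz5m3v       # deliberately too permissive
    if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.tYucyz5m3v; then
        echo "FAIL: world-readable PSK was accepted" >&2
        exit 1
    fi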
00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.982 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.982 [2024-07-15 14:07:45.050731] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:46.983 [2024-07-15 14:07:45.050799] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.983 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.243 [2024-07-15 14:07:45.139219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.243 [2024-07-15 14:07:45.192315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.243 [2024-07-15 14:07:45.192351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.243 [2024-07-15 14:07:45.192357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.243 [2024-07-15 14:07:45.192361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.243 [2024-07-15 14:07:45.192365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.243 [2024-07-15 14:07:45.192386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.tYucyz5m3v 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.tYucyz5m3v 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.tYucyz5m3v 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tYucyz5m3v 00:21:47.813 14:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:48.073 [2024-07-15 14:07:45.993571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.073 14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:48.073 
14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:48.333 [2024-07-15 14:07:46.286276] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.333 [2024-07-15 14:07:46.286444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.333 14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:48.333 malloc0 00:21:48.333 14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:48.593 14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tYucyz5m3v 00:21:48.853 [2024-07-15 14:07:46.708987] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:48.853 [2024-07-15 14:07:46.709004] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:48.853 [2024-07-15 14:07:46.709024] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:48.853 request: 00:21:48.853 { 00:21:48.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.853 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.853 "psk": "/tmp/tmp.tYucyz5m3v", 00:21:48.853 "method": "nvmf_subsystem_add_host", 00:21:48.853 "req_id": 1 00:21:48.853 } 00:21:48.853 Got JSON-RPC error response 00:21:48.853 response: 00:21:48.853 { 00:21:48.853 "code": -32603, 00:21:48.853 "message": "Internal error" 00:21:48.853 } 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1412702 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1412702 ']' 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1412702 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1412702 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1412702' 00:21:48.853 killing process with pid 1412702 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1412702 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1412702 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.tYucyz5m3v 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:48.853 
14:07:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1413076 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1413076 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1413076 ']' 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.853 14:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.113 [2024-07-15 14:07:46.971713] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:49.113 [2024-07-15 14:07:46.971774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.113 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.113 [2024-07-15 14:07:47.060836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.113 [2024-07-15 14:07:47.114531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.113 [2024-07-15 14:07:47.114559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.113 [2024-07-15 14:07:47.114564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.113 [2024-07-15 14:07:47.114569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.113 [2024-07-15 14:07:47.114573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
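The NOT setup_nvmf_tgt pass just completed (steps 177-180) repeats the permission check from the target side: a fresh nvmf_tgt (pid 1412702) configures normally until nvmf_subsystem_add_host, where tcp_load_psk refuses the still-0666 key and the RPC surfaces -32603 "Internal error" rather than the initiator-side -1. The key mode is then restored and pid 1413076 started for the next pass. The guarded step reduces to something like:

    NOT rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tYucyz5m3v   # expected: -32603
    chmod 0600 /tmp/tmp.tYucyz5m3v       # back to the owner-only mode the tests require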
00:21:49.113 [2024-07-15 14:07:47.114592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.tYucyz5m3v 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tYucyz5m3v 00:21:49.683 14:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:49.984 [2024-07-15 14:07:47.896075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.984 14:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.984 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:50.291 [2024-07-15 14:07:48.188777] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.291 [2024-07-15 14:07:48.188941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.291 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:50.291 malloc0 00:21:50.291 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tYucyz5m3v 00:21:50.551 [2024-07-15 14:07:48.615484] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1413444 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1413444 /var/tmp/bdevperf.sock 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1413444 ']' 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.551 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.551 [2024-07-15 14:07:48.659523] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:50.551 [2024-07-15 14:07:48.659576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413444 ] 00:21:50.811 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.811 [2024-07-15 14:07:48.715057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.811 [2024-07-15 14:07:48.766843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.811 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.811 14:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:50.811 14:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tYucyz5m3v 00:21:51.071 [2024-07-15 14:07:48.985707] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.071 [2024-07-15 14:07:48.985775] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.071 TLSTESTn1 00:21:51.071 14:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:51.332 14:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:51.332 "subsystems": [ 00:21:51.332 { 00:21:51.332 "subsystem": "keyring", 00:21:51.332 "config": [] 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "subsystem": "iobuf", 00:21:51.332 "config": [ 00:21:51.332 { 00:21:51.332 "method": "iobuf_set_options", 00:21:51.332 "params": { 00:21:51.332 "small_pool_count": 8192, 00:21:51.332 "large_pool_count": 1024, 00:21:51.332 "small_bufsize": 8192, 00:21:51.332 "large_bufsize": 135168 00:21:51.332 } 00:21:51.332 } 00:21:51.332 ] 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "subsystem": "sock", 00:21:51.332 "config": [ 00:21:51.332 { 00:21:51.332 "method": "sock_set_default_impl", 00:21:51.332 "params": { 00:21:51.332 "impl_name": "posix" 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "sock_impl_set_options", 00:21:51.332 "params": { 00:21:51.332 "impl_name": "ssl", 00:21:51.332 "recv_buf_size": 4096, 00:21:51.332 "send_buf_size": 4096, 00:21:51.332 "enable_recv_pipe": true, 00:21:51.332 "enable_quickack": false, 00:21:51.332 "enable_placement_id": 0, 00:21:51.332 "enable_zerocopy_send_server": true, 00:21:51.332 "enable_zerocopy_send_client": false, 00:21:51.332 "zerocopy_threshold": 0, 00:21:51.332 "tls_version": 0, 00:21:51.332 "enable_ktls": false 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "sock_impl_set_options", 00:21:51.332 "params": { 00:21:51.332 "impl_name": "posix", 00:21:51.332 "recv_buf_size": 2097152, 00:21:51.332 
"send_buf_size": 2097152, 00:21:51.332 "enable_recv_pipe": true, 00:21:51.332 "enable_quickack": false, 00:21:51.332 "enable_placement_id": 0, 00:21:51.332 "enable_zerocopy_send_server": true, 00:21:51.332 "enable_zerocopy_send_client": false, 00:21:51.332 "zerocopy_threshold": 0, 00:21:51.332 "tls_version": 0, 00:21:51.332 "enable_ktls": false 00:21:51.332 } 00:21:51.332 } 00:21:51.332 ] 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "subsystem": "vmd", 00:21:51.332 "config": [] 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "subsystem": "accel", 00:21:51.332 "config": [ 00:21:51.332 { 00:21:51.332 "method": "accel_set_options", 00:21:51.332 "params": { 00:21:51.332 "small_cache_size": 128, 00:21:51.332 "large_cache_size": 16, 00:21:51.332 "task_count": 2048, 00:21:51.332 "sequence_count": 2048, 00:21:51.332 "buf_count": 2048 00:21:51.332 } 00:21:51.332 } 00:21:51.332 ] 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "subsystem": "bdev", 00:21:51.332 "config": [ 00:21:51.332 { 00:21:51.332 "method": "bdev_set_options", 00:21:51.332 "params": { 00:21:51.332 "bdev_io_pool_size": 65535, 00:21:51.332 "bdev_io_cache_size": 256, 00:21:51.332 "bdev_auto_examine": true, 00:21:51.332 "iobuf_small_cache_size": 128, 00:21:51.332 "iobuf_large_cache_size": 16 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "bdev_raid_set_options", 00:21:51.332 "params": { 00:21:51.332 "process_window_size_kb": 1024 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "bdev_iscsi_set_options", 00:21:51.332 "params": { 00:21:51.332 "timeout_sec": 30 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "bdev_nvme_set_options", 00:21:51.332 "params": { 00:21:51.332 "action_on_timeout": "none", 00:21:51.332 "timeout_us": 0, 00:21:51.332 "timeout_admin_us": 0, 00:21:51.332 "keep_alive_timeout_ms": 10000, 00:21:51.332 "arbitration_burst": 0, 00:21:51.332 "low_priority_weight": 0, 00:21:51.332 "medium_priority_weight": 0, 00:21:51.332 "high_priority_weight": 0, 00:21:51.332 "nvme_adminq_poll_period_us": 10000, 00:21:51.332 "nvme_ioq_poll_period_us": 0, 00:21:51.332 "io_queue_requests": 0, 00:21:51.332 "delay_cmd_submit": true, 00:21:51.332 "transport_retry_count": 4, 00:21:51.332 "bdev_retry_count": 3, 00:21:51.332 "transport_ack_timeout": 0, 00:21:51.332 "ctrlr_loss_timeout_sec": 0, 00:21:51.332 "reconnect_delay_sec": 0, 00:21:51.332 "fast_io_fail_timeout_sec": 0, 00:21:51.332 "disable_auto_failback": false, 00:21:51.332 "generate_uuids": false, 00:21:51.332 "transport_tos": 0, 00:21:51.332 "nvme_error_stat": false, 00:21:51.332 "rdma_srq_size": 0, 00:21:51.332 "io_path_stat": false, 00:21:51.332 "allow_accel_sequence": false, 00:21:51.332 "rdma_max_cq_size": 0, 00:21:51.332 "rdma_cm_event_timeout_ms": 0, 00:21:51.332 "dhchap_digests": [ 00:21:51.332 "sha256", 00:21:51.332 "sha384", 00:21:51.332 "sha512" 00:21:51.332 ], 00:21:51.332 "dhchap_dhgroups": [ 00:21:51.332 "null", 00:21:51.332 "ffdhe2048", 00:21:51.332 "ffdhe3072", 00:21:51.332 "ffdhe4096", 00:21:51.332 "ffdhe6144", 00:21:51.332 "ffdhe8192" 00:21:51.332 ] 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "bdev_nvme_set_hotplug", 00:21:51.332 "params": { 00:21:51.332 "period_us": 100000, 00:21:51.332 "enable": false 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "bdev_malloc_create", 00:21:51.332 "params": { 00:21:51.332 "name": "malloc0", 00:21:51.332 "num_blocks": 8192, 00:21:51.332 "block_size": 4096, 00:21:51.332 "physical_block_size": 4096, 00:21:51.332 "uuid": 
"9282ec8a-01e9-421a-8d51-9dba5f746906", 00:21:51.332 "optimal_io_boundary": 0 00:21:51.332 } 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "method": "bdev_wait_for_examine" 00:21:51.332 } 00:21:51.332 ] 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "subsystem": "nbd", 00:21:51.332 "config": [] 00:21:51.332 }, 00:21:51.332 { 00:21:51.332 "subsystem": "scheduler", 00:21:51.332 "config": [ 00:21:51.333 { 00:21:51.333 "method": "framework_set_scheduler", 00:21:51.333 "params": { 00:21:51.333 "name": "static" 00:21:51.333 } 00:21:51.333 } 00:21:51.333 ] 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "subsystem": "nvmf", 00:21:51.333 "config": [ 00:21:51.333 { 00:21:51.333 "method": "nvmf_set_config", 00:21:51.333 "params": { 00:21:51.333 "discovery_filter": "match_any", 00:21:51.333 "admin_cmd_passthru": { 00:21:51.333 "identify_ctrlr": false 00:21:51.333 } 00:21:51.333 } 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "method": "nvmf_set_max_subsystems", 00:21:51.333 "params": { 00:21:51.333 "max_subsystems": 1024 00:21:51.333 } 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "method": "nvmf_set_crdt", 00:21:51.333 "params": { 00:21:51.333 "crdt1": 0, 00:21:51.333 "crdt2": 0, 00:21:51.333 "crdt3": 0 00:21:51.333 } 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "method": "nvmf_create_transport", 00:21:51.333 "params": { 00:21:51.333 "trtype": "TCP", 00:21:51.333 "max_queue_depth": 128, 00:21:51.333 "max_io_qpairs_per_ctrlr": 127, 00:21:51.333 "in_capsule_data_size": 4096, 00:21:51.333 "max_io_size": 131072, 00:21:51.333 "io_unit_size": 131072, 00:21:51.333 "max_aq_depth": 128, 00:21:51.333 "num_shared_buffers": 511, 00:21:51.333 "buf_cache_size": 4294967295, 00:21:51.333 "dif_insert_or_strip": false, 00:21:51.333 "zcopy": false, 00:21:51.333 "c2h_success": false, 00:21:51.333 "sock_priority": 0, 00:21:51.333 "abort_timeout_sec": 1, 00:21:51.333 "ack_timeout": 0, 00:21:51.333 "data_wr_pool_size": 0 00:21:51.333 } 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "method": "nvmf_create_subsystem", 00:21:51.333 "params": { 00:21:51.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.333 "allow_any_host": false, 00:21:51.333 "serial_number": "SPDK00000000000001", 00:21:51.333 "model_number": "SPDK bdev Controller", 00:21:51.333 "max_namespaces": 10, 00:21:51.333 "min_cntlid": 1, 00:21:51.333 "max_cntlid": 65519, 00:21:51.333 "ana_reporting": false 00:21:51.333 } 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "method": "nvmf_subsystem_add_host", 00:21:51.333 "params": { 00:21:51.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.333 "host": "nqn.2016-06.io.spdk:host1", 00:21:51.333 "psk": "/tmp/tmp.tYucyz5m3v" 00:21:51.333 } 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "method": "nvmf_subsystem_add_ns", 00:21:51.333 "params": { 00:21:51.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.333 "namespace": { 00:21:51.333 "nsid": 1, 00:21:51.333 "bdev_name": "malloc0", 00:21:51.333 "nguid": "9282EC8A01E9421A8D519DBA5F746906", 00:21:51.333 "uuid": "9282ec8a-01e9-421a-8d51-9dba5f746906", 00:21:51.333 "no_auto_visible": false 00:21:51.333 } 00:21:51.333 } 00:21:51.333 }, 00:21:51.333 { 00:21:51.333 "method": "nvmf_subsystem_add_listener", 00:21:51.333 "params": { 00:21:51.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.333 "listen_address": { 00:21:51.333 "trtype": "TCP", 00:21:51.333 "adrfam": "IPv4", 00:21:51.333 "traddr": "10.0.0.2", 00:21:51.333 "trsvcid": "4420" 00:21:51.333 }, 00:21:51.333 "secure_channel": true 00:21:51.333 } 00:21:51.333 } 00:21:51.333 ] 00:21:51.333 } 00:21:51.333 ] 00:21:51.333 }' 00:21:51.333 14:07:49 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:51.593 14:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:51.593 "subsystems": [ 00:21:51.593 { 00:21:51.593 "subsystem": "keyring", 00:21:51.593 "config": [] 00:21:51.593 }, 00:21:51.593 { 00:21:51.593 "subsystem": "iobuf", 00:21:51.593 "config": [ 00:21:51.593 { 00:21:51.593 "method": "iobuf_set_options", 00:21:51.593 "params": { 00:21:51.593 "small_pool_count": 8192, 00:21:51.593 "large_pool_count": 1024, 00:21:51.593 "small_bufsize": 8192, 00:21:51.593 "large_bufsize": 135168 00:21:51.593 } 00:21:51.593 } 00:21:51.593 ] 00:21:51.593 }, 00:21:51.593 { 00:21:51.593 "subsystem": "sock", 00:21:51.593 "config": [ 00:21:51.593 { 00:21:51.593 "method": "sock_set_default_impl", 00:21:51.593 "params": { 00:21:51.593 "impl_name": "posix" 00:21:51.593 } 00:21:51.593 }, 00:21:51.593 { 00:21:51.593 "method": "sock_impl_set_options", 00:21:51.593 "params": { 00:21:51.593 "impl_name": "ssl", 00:21:51.593 "recv_buf_size": 4096, 00:21:51.593 "send_buf_size": 4096, 00:21:51.593 "enable_recv_pipe": true, 00:21:51.593 "enable_quickack": false, 00:21:51.593 "enable_placement_id": 0, 00:21:51.593 "enable_zerocopy_send_server": true, 00:21:51.593 "enable_zerocopy_send_client": false, 00:21:51.593 "zerocopy_threshold": 0, 00:21:51.593 "tls_version": 0, 00:21:51.593 "enable_ktls": false 00:21:51.593 } 00:21:51.593 }, 00:21:51.593 { 00:21:51.593 "method": "sock_impl_set_options", 00:21:51.593 "params": { 00:21:51.593 "impl_name": "posix", 00:21:51.593 "recv_buf_size": 2097152, 00:21:51.593 "send_buf_size": 2097152, 00:21:51.593 "enable_recv_pipe": true, 00:21:51.593 "enable_quickack": false, 00:21:51.593 "enable_placement_id": 0, 00:21:51.593 "enable_zerocopy_send_server": true, 00:21:51.593 "enable_zerocopy_send_client": false, 00:21:51.593 "zerocopy_threshold": 0, 00:21:51.593 "tls_version": 0, 00:21:51.593 "enable_ktls": false 00:21:51.593 } 00:21:51.593 } 00:21:51.593 ] 00:21:51.593 }, 00:21:51.593 { 00:21:51.593 "subsystem": "vmd", 00:21:51.594 "config": [] 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "subsystem": "accel", 00:21:51.594 "config": [ 00:21:51.594 { 00:21:51.594 "method": "accel_set_options", 00:21:51.594 "params": { 00:21:51.594 "small_cache_size": 128, 00:21:51.594 "large_cache_size": 16, 00:21:51.594 "task_count": 2048, 00:21:51.594 "sequence_count": 2048, 00:21:51.594 "buf_count": 2048 00:21:51.594 } 00:21:51.594 } 00:21:51.594 ] 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "subsystem": "bdev", 00:21:51.594 "config": [ 00:21:51.594 { 00:21:51.594 "method": "bdev_set_options", 00:21:51.594 "params": { 00:21:51.594 "bdev_io_pool_size": 65535, 00:21:51.594 "bdev_io_cache_size": 256, 00:21:51.594 "bdev_auto_examine": true, 00:21:51.594 "iobuf_small_cache_size": 128, 00:21:51.594 "iobuf_large_cache_size": 16 00:21:51.594 } 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "method": "bdev_raid_set_options", 00:21:51.594 "params": { 00:21:51.594 "process_window_size_kb": 1024 00:21:51.594 } 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "method": "bdev_iscsi_set_options", 00:21:51.594 "params": { 00:21:51.594 "timeout_sec": 30 00:21:51.594 } 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "method": "bdev_nvme_set_options", 00:21:51.594 "params": { 00:21:51.594 "action_on_timeout": "none", 00:21:51.594 "timeout_us": 0, 00:21:51.594 "timeout_admin_us": 0, 00:21:51.594 "keep_alive_timeout_ms": 10000, 00:21:51.594 "arbitration_burst": 0, 
00:21:51.594 "low_priority_weight": 0, 00:21:51.594 "medium_priority_weight": 0, 00:21:51.594 "high_priority_weight": 0, 00:21:51.594 "nvme_adminq_poll_period_us": 10000, 00:21:51.594 "nvme_ioq_poll_period_us": 0, 00:21:51.594 "io_queue_requests": 512, 00:21:51.594 "delay_cmd_submit": true, 00:21:51.594 "transport_retry_count": 4, 00:21:51.594 "bdev_retry_count": 3, 00:21:51.594 "transport_ack_timeout": 0, 00:21:51.594 "ctrlr_loss_timeout_sec": 0, 00:21:51.594 "reconnect_delay_sec": 0, 00:21:51.594 "fast_io_fail_timeout_sec": 0, 00:21:51.594 "disable_auto_failback": false, 00:21:51.594 "generate_uuids": false, 00:21:51.594 "transport_tos": 0, 00:21:51.594 "nvme_error_stat": false, 00:21:51.594 "rdma_srq_size": 0, 00:21:51.594 "io_path_stat": false, 00:21:51.594 "allow_accel_sequence": false, 00:21:51.594 "rdma_max_cq_size": 0, 00:21:51.594 "rdma_cm_event_timeout_ms": 0, 00:21:51.594 "dhchap_digests": [ 00:21:51.594 "sha256", 00:21:51.594 "sha384", 00:21:51.594 "sha512" 00:21:51.594 ], 00:21:51.594 "dhchap_dhgroups": [ 00:21:51.594 "null", 00:21:51.594 "ffdhe2048", 00:21:51.594 "ffdhe3072", 00:21:51.594 "ffdhe4096", 00:21:51.594 "ffdhe6144", 00:21:51.594 "ffdhe8192" 00:21:51.594 ] 00:21:51.594 } 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "method": "bdev_nvme_attach_controller", 00:21:51.594 "params": { 00:21:51.594 "name": "TLSTEST", 00:21:51.594 "trtype": "TCP", 00:21:51.594 "adrfam": "IPv4", 00:21:51.594 "traddr": "10.0.0.2", 00:21:51.594 "trsvcid": "4420", 00:21:51.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.594 "prchk_reftag": false, 00:21:51.594 "prchk_guard": false, 00:21:51.594 "ctrlr_loss_timeout_sec": 0, 00:21:51.594 "reconnect_delay_sec": 0, 00:21:51.594 "fast_io_fail_timeout_sec": 0, 00:21:51.594 "psk": "/tmp/tmp.tYucyz5m3v", 00:21:51.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.594 "hdgst": false, 00:21:51.594 "ddgst": false 00:21:51.594 } 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "method": "bdev_nvme_set_hotplug", 00:21:51.594 "params": { 00:21:51.594 "period_us": 100000, 00:21:51.594 "enable": false 00:21:51.594 } 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "method": "bdev_wait_for_examine" 00:21:51.594 } 00:21:51.594 ] 00:21:51.594 }, 00:21:51.594 { 00:21:51.594 "subsystem": "nbd", 00:21:51.594 "config": [] 00:21:51.594 } 00:21:51.594 ] 00:21:51.594 }' 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1413444 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1413444 ']' 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1413444 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1413444 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1413444' 00:21:51.594 killing process with pid 1413444 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1413444 00:21:51.594 Received shutdown signal, test time was about 10.000000 seconds 00:21:51.594 00:21:51.594 Latency(us) 00:21:51.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:51.594 =================================================================================================================== 00:21:51.594 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:51.594 [2024-07-15 14:07:49.609029] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:51.594 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1413444 00:21:51.855 14:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1413076 00:21:51.855 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1413076 ']' 00:21:51.855 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1413076 00:21:51.855 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:51.855 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.855 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1413076 00:21:51.855 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1413076' 00:21:51.856 killing process with pid 1413076 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1413076 00:21:51.856 [2024-07-15 14:07:49.773916] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1413076 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.856 14:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:51.856 "subsystems": [ 00:21:51.856 { 00:21:51.856 "subsystem": "keyring", 00:21:51.856 "config": [] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "iobuf", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "iobuf_set_options", 00:21:51.856 "params": { 00:21:51.856 "small_pool_count": 8192, 00:21:51.856 "large_pool_count": 1024, 00:21:51.856 "small_bufsize": 8192, 00:21:51.856 "large_bufsize": 135168 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "sock", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "sock_set_default_impl", 00:21:51.856 "params": { 00:21:51.856 "impl_name": "posix" 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "sock_impl_set_options", 00:21:51.856 "params": { 00:21:51.856 "impl_name": "ssl", 00:21:51.856 "recv_buf_size": 4096, 00:21:51.856 "send_buf_size": 4096, 00:21:51.856 "enable_recv_pipe": true, 00:21:51.856 "enable_quickack": false, 00:21:51.856 "enable_placement_id": 0, 00:21:51.856 "enable_zerocopy_send_server": true, 00:21:51.856 "enable_zerocopy_send_client": false, 00:21:51.856 "zerocopy_threshold": 0, 00:21:51.856 "tls_version": 0, 00:21:51.856 "enable_ktls": false 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "sock_impl_set_options", 
00:21:51.856 "params": { 00:21:51.856 "impl_name": "posix", 00:21:51.856 "recv_buf_size": 2097152, 00:21:51.856 "send_buf_size": 2097152, 00:21:51.856 "enable_recv_pipe": true, 00:21:51.856 "enable_quickack": false, 00:21:51.856 "enable_placement_id": 0, 00:21:51.856 "enable_zerocopy_send_server": true, 00:21:51.856 "enable_zerocopy_send_client": false, 00:21:51.856 "zerocopy_threshold": 0, 00:21:51.856 "tls_version": 0, 00:21:51.856 "enable_ktls": false 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "vmd", 00:21:51.856 "config": [] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "accel", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "accel_set_options", 00:21:51.856 "params": { 00:21:51.856 "small_cache_size": 128, 00:21:51.856 "large_cache_size": 16, 00:21:51.856 "task_count": 2048, 00:21:51.856 "sequence_count": 2048, 00:21:51.856 "buf_count": 2048 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "bdev", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "bdev_set_options", 00:21:51.856 "params": { 00:21:51.856 "bdev_io_pool_size": 65535, 00:21:51.856 "bdev_io_cache_size": 256, 00:21:51.856 "bdev_auto_examine": true, 00:21:51.856 "iobuf_small_cache_size": 128, 00:21:51.856 "iobuf_large_cache_size": 16 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_raid_set_options", 00:21:51.856 "params": { 00:21:51.856 "process_window_size_kb": 1024 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_iscsi_set_options", 00:21:51.856 "params": { 00:21:51.856 "timeout_sec": 30 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_nvme_set_options", 00:21:51.856 "params": { 00:21:51.856 "action_on_timeout": "none", 00:21:51.856 "timeout_us": 0, 00:21:51.856 "timeout_admin_us": 0, 00:21:51.856 "keep_alive_timeout_ms": 10000, 00:21:51.856 "arbitration_burst": 0, 00:21:51.856 "low_priority_weight": 0, 00:21:51.856 "medium_priority_weight": 0, 00:21:51.856 "high_priority_weight": 0, 00:21:51.856 "nvme_adminq_poll_period_us": 10000, 00:21:51.856 "nvme_ioq_poll_period_us": 0, 00:21:51.856 "io_queue_requests": 0, 00:21:51.856 "delay_cmd_submit": true, 00:21:51.856 "transport_retry_count": 4, 00:21:51.856 "bdev_retry_count": 3, 00:21:51.856 "transport_ack_timeout": 0, 00:21:51.856 "ctrlr_loss_timeout_sec": 0, 00:21:51.856 "reconnect_delay_sec": 0, 00:21:51.856 "fast_io_fail_timeout_sec": 0, 00:21:51.856 "disable_auto_failback": false, 00:21:51.856 "generate_uuids": false, 00:21:51.856 "transport_tos": 0, 00:21:51.856 "nvme_error_stat": false, 00:21:51.856 "rdma_srq_size": 0, 00:21:51.856 "io_path_stat": false, 00:21:51.856 "allow_accel_sequence": false, 00:21:51.856 "rdma_max_cq_size": 0, 00:21:51.856 "rdma_cm_event_timeout_ms": 0, 00:21:51.856 "dhchap_digests": [ 00:21:51.856 "sha256", 00:21:51.856 "sha384", 00:21:51.856 "sha512" 00:21:51.856 ], 00:21:51.856 "dhchap_dhgroups": [ 00:21:51.856 "null", 00:21:51.856 "ffdhe2048", 00:21:51.856 "ffdhe3072", 00:21:51.856 "ffdhe4096", 00:21:51.856 "ffdhe6144", 00:21:51.856 "ffdhe8192" 00:21:51.856 ] 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_nvme_set_hotplug", 00:21:51.856 "params": { 00:21:51.856 "period_us": 100000, 00:21:51.856 "enable": false 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_malloc_create", 00:21:51.856 "params": { 00:21:51.856 "name": "malloc0", 00:21:51.856 "num_blocks": 8192, 
00:21:51.856 "block_size": 4096, 00:21:51.856 "physical_block_size": 4096, 00:21:51.856 "uuid": "9282ec8a-01e9-421a-8d51-9dba5f746906", 00:21:51.856 "optimal_io_boundary": 0 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_wait_for_examine" 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "nbd", 00:21:51.856 "config": [] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "scheduler", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "framework_set_scheduler", 00:21:51.856 "params": { 00:21:51.856 "name": "static" 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "nvmf", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "nvmf_set_config", 00:21:51.856 "params": { 00:21:51.856 "discovery_filter": "match_any", 00:21:51.856 "admin_cmd_passthru": { 00:21:51.856 "identify_ctrlr": false 00:21:51.856 } 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "nvmf_set_max_subsystems", 00:21:51.856 "params": { 00:21:51.856 "max_subsystems": 1024 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "nvmf_set_crdt", 00:21:51.856 "params": { 00:21:51.856 "crdt1": 0, 00:21:51.856 "crdt2": 0, 00:21:51.856 "crdt3": 0 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "nvmf_create_transport", 00:21:51.857 "params": { 00:21:51.857 "trtype": "TCP", 00:21:51.857 "max_queue_depth": 128, 00:21:51.857 "max_io_qpairs_per_ctrlr": 127, 00:21:51.857 "in_capsule_data_size": 4096, 00:21:51.857 "max_io_size": 131072, 00:21:51.857 "io_unit_size": 131072, 00:21:51.857 "max_aq_depth": 128, 00:21:51.857 "num_shared_buffers": 511, 00:21:51.857 "buf_cache_size": 4294967295, 00:21:51.857 "dif_insert_or_strip": false, 00:21:51.857 "zcopy": false, 00:21:51.857 "c2h_success": false, 00:21:51.857 "sock_priority": 0, 00:21:51.857 "abort_timeout_sec": 1, 00:21:51.857 "ack_timeout": 0, 00:21:51.857 "data_wr_pool_size": 0 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "nvmf_create_subsystem", 00:21:51.857 "params": { 00:21:51.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.857 "allow_any_host": false, 00:21:51.857 "serial_number": "SPDK00000000000001", 00:21:51.857 "model_number": "SPDK bdev Controller", 00:21:51.857 "max_namespaces": 10, 00:21:51.857 "min_cntlid": 1, 00:21:51.857 "max_cntlid": 65519, 00:21:51.857 "ana_reporting": false 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "nvmf_subsystem_add_host", 00:21:51.857 "params": { 00:21:51.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.857 "host": "nqn.2016-06.io.spdk:host1", 00:21:51.857 "psk": "/tmp/tmp.tYucyz5m3v" 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "nvmf_subsystem_add_ns", 00:21:51.857 "params": { 00:21:51.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.857 "namespace": { 00:21:51.857 "nsid": 1, 00:21:51.857 "bdev_name": "malloc0", 00:21:51.857 "nguid": "9282EC8A01E9421A8D519DBA5F746906", 00:21:51.857 "uuid": "9282ec8a-01e9-421a-8d51-9dba5f746906", 00:21:51.857 "no_auto_visible": false 00:21:51.857 } 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "nvmf_subsystem_add_listener", 00:21:51.857 "params": { 00:21:51.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.857 "listen_address": { 00:21:51.857 "trtype": "TCP", 00:21:51.857 "adrfam": "IPv4", 00:21:51.857 "traddr": "10.0.0.2", 00:21:51.857 "trsvcid": "4420" 00:21:51.857 }, 00:21:51.857 "secure_channel": true 00:21:51.857 } 
00:21:51.857 } 00:21:51.857 ] 00:21:51.857 } 00:21:51.857 ] 00:21:51.857 }' 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1413786 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1413786 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1413786 ']' 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.857 14:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.857 [2024-07-15 14:07:49.954337] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:51.857 [2024-07-15 14:07:49.954393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.117 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.117 [2024-07-15 14:07:50.046537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.117 [2024-07-15 14:07:50.100668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.117 [2024-07-15 14:07:50.100704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.117 [2024-07-15 14:07:50.100710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.117 [2024-07-15 14:07:50.100714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.117 [2024-07-15 14:07:50.100718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
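The nvmf_tgt invocation above takes its JSON configuration on /dev/fd/62: the blob printed before it is piped straight into the target instead of being written to a file. A minimal sketch of the same launch pattern, with an illustrative trimmed-down config in place of the full blob above:

# Start nvmf_tgt with an inline JSON config; bash process substitution
# exposes the echoed text as /dev/fd/NN, which is the -c /dev/fd/62 in the log.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo '{ "subsystems": [ { "subsystem": "nvmf", "config": [] } ] }')

The -m 0x2 core mask is why this target's reactor comes up on core 1 just below, while the bdevperf instance launched later with -m 0x4 lands on core 2.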
00:21:52.117 [2024-07-15 14:07:50.100775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.378 [2024-07-15 14:07:50.283385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.378 [2024-07-15 14:07:50.299362] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:52.378 [2024-07-15 14:07:50.315409] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:52.378 [2024-07-15 14:07:50.324047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1413918 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1413918 /var/tmp/bdevperf.sock 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1413918 ']' 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.638 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
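The nvmf_tcp_psk_path warning above refers to how this test wires up TLS: the pre-shared key sits in a plain file (/tmp/tmp.tYucyz5m3v in this run) that is handed to nvmf_subsystem_add_host as a path, a flow slated for removal in v24.09 in favor of the keyring used later in this log. A hedged sketch of that deprecated file-path flow; the key string is a placeholder in the NVMe TLS PSK interchange format, not the key this run uses:

# Deprecated PSK-path flow (the v24.09 removal warning above).
PSK_FILE=$(mktemp)                                 # e.g. /tmp/tmp.XXXXXXXXXX
echo -n 'NVMeTLSkey-1:01:<base64 key material>:' > "$PSK_FILE"   # placeholder
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$PSK_FILE"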
00:21:52.639 14:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:52.639 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.899 14:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.899 14:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:52.899 "subsystems": [ 00:21:52.899 { 00:21:52.899 "subsystem": "keyring", 00:21:52.899 "config": [] 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "subsystem": "iobuf", 00:21:52.899 "config": [ 00:21:52.899 { 00:21:52.899 "method": "iobuf_set_options", 00:21:52.899 "params": { 00:21:52.899 "small_pool_count": 8192, 00:21:52.899 "large_pool_count": 1024, 00:21:52.899 "small_bufsize": 8192, 00:21:52.899 "large_bufsize": 135168 00:21:52.899 } 00:21:52.899 } 00:21:52.899 ] 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "subsystem": "sock", 00:21:52.899 "config": [ 00:21:52.899 { 00:21:52.899 "method": "sock_set_default_impl", 00:21:52.899 "params": { 00:21:52.899 "impl_name": "posix" 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": "sock_impl_set_options", 00:21:52.899 "params": { 00:21:52.899 "impl_name": "ssl", 00:21:52.899 "recv_buf_size": 4096, 00:21:52.899 "send_buf_size": 4096, 00:21:52.899 "enable_recv_pipe": true, 00:21:52.899 "enable_quickack": false, 00:21:52.899 "enable_placement_id": 0, 00:21:52.899 "enable_zerocopy_send_server": true, 00:21:52.899 "enable_zerocopy_send_client": false, 00:21:52.899 "zerocopy_threshold": 0, 00:21:52.899 "tls_version": 0, 00:21:52.899 "enable_ktls": false 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": "sock_impl_set_options", 00:21:52.899 "params": { 00:21:52.899 "impl_name": "posix", 00:21:52.899 "recv_buf_size": 2097152, 00:21:52.899 "send_buf_size": 2097152, 00:21:52.899 "enable_recv_pipe": true, 00:21:52.899 "enable_quickack": false, 00:21:52.899 "enable_placement_id": 0, 00:21:52.899 "enable_zerocopy_send_server": true, 00:21:52.899 "enable_zerocopy_send_client": false, 00:21:52.899 "zerocopy_threshold": 0, 00:21:52.899 "tls_version": 0, 00:21:52.899 "enable_ktls": false 00:21:52.899 } 00:21:52.899 } 00:21:52.899 ] 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "subsystem": "vmd", 00:21:52.899 "config": [] 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "subsystem": "accel", 00:21:52.899 "config": [ 00:21:52.899 { 00:21:52.899 "method": "accel_set_options", 00:21:52.899 "params": { 00:21:52.899 "small_cache_size": 128, 00:21:52.899 "large_cache_size": 16, 00:21:52.899 "task_count": 2048, 00:21:52.899 "sequence_count": 2048, 00:21:52.899 "buf_count": 2048 00:21:52.899 } 00:21:52.899 } 00:21:52.899 ] 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "subsystem": "bdev", 00:21:52.899 "config": [ 00:21:52.899 { 00:21:52.899 "method": "bdev_set_options", 00:21:52.899 "params": { 00:21:52.899 "bdev_io_pool_size": 65535, 00:21:52.899 "bdev_io_cache_size": 256, 00:21:52.899 "bdev_auto_examine": true, 00:21:52.899 "iobuf_small_cache_size": 128, 00:21:52.899 "iobuf_large_cache_size": 16 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": "bdev_raid_set_options", 00:21:52.899 "params": { 00:21:52.899 "process_window_size_kb": 1024 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": "bdev_iscsi_set_options", 00:21:52.899 "params": { 00:21:52.899 "timeout_sec": 30 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": 
"bdev_nvme_set_options", 00:21:52.899 "params": { 00:21:52.899 "action_on_timeout": "none", 00:21:52.899 "timeout_us": 0, 00:21:52.899 "timeout_admin_us": 0, 00:21:52.899 "keep_alive_timeout_ms": 10000, 00:21:52.899 "arbitration_burst": 0, 00:21:52.899 "low_priority_weight": 0, 00:21:52.899 "medium_priority_weight": 0, 00:21:52.899 "high_priority_weight": 0, 00:21:52.899 "nvme_adminq_poll_period_us": 10000, 00:21:52.899 "nvme_ioq_poll_period_us": 0, 00:21:52.899 "io_queue_requests": 512, 00:21:52.899 "delay_cmd_submit": true, 00:21:52.899 "transport_retry_count": 4, 00:21:52.899 "bdev_retry_count": 3, 00:21:52.899 "transport_ack_timeout": 0, 00:21:52.899 "ctrlr_loss_timeout_sec": 0, 00:21:52.899 "reconnect_delay_sec": 0, 00:21:52.899 "fast_io_fail_timeout_sec": 0, 00:21:52.899 "disable_auto_failback": false, 00:21:52.899 "generate_uuids": false, 00:21:52.899 "transport_tos": 0, 00:21:52.899 "nvme_error_stat": false, 00:21:52.899 "rdma_srq_size": 0, 00:21:52.899 "io_path_stat": false, 00:21:52.899 "allow_accel_sequence": false, 00:21:52.899 "rdma_max_cq_size": 0, 00:21:52.899 "rdma_cm_event_timeout_ms": 0, 00:21:52.899 "dhchap_digests": [ 00:21:52.899 "sha256", 00:21:52.899 "sha384", 00:21:52.899 "sha512" 00:21:52.899 ], 00:21:52.899 "dhchap_dhgroups": [ 00:21:52.899 "null", 00:21:52.899 "ffdhe2048", 00:21:52.899 "ffdhe3072", 00:21:52.899 "ffdhe4096", 00:21:52.899 "ffdhe6144", 00:21:52.899 "ffdhe8192" 00:21:52.899 ] 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": "bdev_nvme_attach_controller", 00:21:52.899 "params": { 00:21:52.899 "name": "TLSTEST", 00:21:52.899 "trtype": "TCP", 00:21:52.899 "adrfam": "IPv4", 00:21:52.899 "traddr": "10.0.0.2", 00:21:52.899 "trsvcid": "4420", 00:21:52.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.899 "prchk_reftag": false, 00:21:52.899 "prchk_guard": false, 00:21:52.899 "ctrlr_loss_timeout_sec": 0, 00:21:52.899 "reconnect_delay_sec": 0, 00:21:52.899 "fast_io_fail_timeout_sec": 0, 00:21:52.899 "psk": "/tmp/tmp.tYucyz5m3v", 00:21:52.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.899 "hdgst": false, 00:21:52.899 "ddgst": false 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": "bdev_nvme_set_hotplug", 00:21:52.899 "params": { 00:21:52.899 "period_us": 100000, 00:21:52.899 "enable": false 00:21:52.899 } 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "method": "bdev_wait_for_examine" 00:21:52.899 } 00:21:52.899 ] 00:21:52.899 }, 00:21:52.899 { 00:21:52.899 "subsystem": "nbd", 00:21:52.899 "config": [] 00:21:52.899 } 00:21:52.899 ] 00:21:52.899 }' 00:21:52.899 [2024-07-15 14:07:50.797022] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:52.899 [2024-07-15 14:07:50.797072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413918 ] 00:21:52.899 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.899 [2024-07-15 14:07:50.852028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.899 [2024-07-15 14:07:50.904735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.159 [2024-07-15 14:07:51.028907] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:53.159 [2024-07-15 14:07:51.028966] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:53.728 14:07:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.728 14:07:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:53.729 14:07:51 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:53.729 Running I/O for 10 seconds... 00:22:03.719 00:22:03.719 Latency(us) 00:22:03.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.719 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:03.719 Verification LBA range: start 0x0 length 0x2000 00:22:03.719 TLSTESTn1 : 10.01 6215.64 24.28 0.00 0.00 20563.43 5406.72 31457.28 00:22:03.719 =================================================================================================================== 00:22:03.719 Total : 6215.64 24.28 0.00 0.00 20563.43 5406.72 31457.28 00:22:03.719 0 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1413918 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1413918 ']' 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1413918 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1413918 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:03.719 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:03.720 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1413918' 00:22:03.720 killing process with pid 1413918 00:22:03.720 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1413918 00:22:03.720 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.720 00:22:03.720 Latency(us) 00:22:03.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.720 =================================================================================================================== 00:22:03.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.720 [2024-07-15 14:08:01.748111] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:03.720 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1413918 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1413786 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1413786 ']' 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1413786 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1413786 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1413786' 00:22:03.979 killing process with pid 1413786 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1413786 00:22:03.979 [2024-07-15 14:08:01.914927] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:03.979 14:08:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1413786 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1416158 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1416158 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1416158 ']' 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.979 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.979 [2024-07-15 14:08:02.091784] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
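The 10-second TLS run completed above is internally consistent: 6215.64 IOPS at the 4096-byte IO size works out to the 24.28 MiB/s in the table, and with a queue depth of 128 Little's law puts the expected average latency near the reported 20563.43 us. Quick checks:

# IOPS x IO size -> MiB/s (matches the 24.28 bdevperf reports)
awk 'BEGIN { printf "%.2f MiB/s\n", 6215.64 * 4096 / (1024 * 1024) }'
# Little's law: avg latency ~= queue depth / IOPS (~20593 us vs 20563.43 us)
awk 'BEGIN { printf "%.0f us\n", 128 / 6215.64 * 1e6 }'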
00:22:03.979 [2024-07-15 14:08:02.091835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.239 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.239 [2024-07-15 14:08:02.163732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.239 [2024-07-15 14:08:02.226634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.239 [2024-07-15 14:08:02.226672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.239 [2024-07-15 14:08:02.226680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.239 [2024-07-15 14:08:02.226686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.239 [2024-07-15 14:08:02.226691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.239 [2024-07-15 14:08:02.226712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.tYucyz5m3v 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tYucyz5m3v 00:22:04.808 14:08:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.067 [2024-07-15 14:08:03.033201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.067 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:05.327 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:05.327 [2024-07-15 14:08:03.370036] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.327 [2024-07-15 14:08:03.370232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.327 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.587 malloc0 00:22:05.587 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.tYucyz5m3v 00:22:05.847 [2024-07-15 14:08:03.861950] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1416519 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1416519 /var/tmp/bdevperf.sock 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1416519 ']' 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.847 14:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.847 [2024-07-15 14:08:03.938897] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:05.847 [2024-07-15 14:08:03.938963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416519 ] 00:22:06.107 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.107 [2024-07-15 14:08:04.020772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.107 [2024-07-15 14:08:04.074053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.679 14:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.679 14:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:06.679 14:08:04 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tYucyz5m3v 00:22:06.940 14:08:04 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:06.940 [2024-07-15 14:08:04.995324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.201 nvme0n1 00:22:07.201 14:08:05 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.201 Running I/O for 1 seconds... 
00:22:08.143 00:22:08.143 Latency(us) 00:22:08.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.144 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:08.144 Verification LBA range: start 0x0 length 0x2000 00:22:08.144 nvme0n1 : 1.02 4638.15 18.12 0.00 0.00 27350.22 6389.76 83449.17 00:22:08.144 =================================================================================================================== 00:22:08.144 Total : 4638.15 18.12 0.00 0.00 27350.22 6389.76 83449.17 00:22:08.144 0 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1416519 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1416519 ']' 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1416519 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416519 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416519' 00:22:08.144 killing process with pid 1416519 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1416519 00:22:08.144 Received shutdown signal, test time was about 1.000000 seconds 00:22:08.144 00:22:08.144 Latency(us) 00:22:08.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.144 =================================================================================================================== 00:22:08.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.144 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1416519 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1416158 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1416158 ']' 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1416158 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416158 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416158' 00:22:08.404 killing process with pid 1416158 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1416158 00:22:08.404 [2024-07-15 14:08:06.419964] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:08.404 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1416158 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.666 
14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1417088 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1417088 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1417088 ']' 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.666 14:08:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.666 [2024-07-15 14:08:06.619253] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:08.666 [2024-07-15 14:08:06.619307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.666 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.666 [2024-07-15 14:08:06.691076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.666 [2024-07-15 14:08:06.754243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.666 [2024-07-15 14:08:06.754279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.666 [2024-07-15 14:08:06.754287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.666 [2024-07-15 14:08:06.754293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.666 [2024-07-15 14:08:06.754299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
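The teardown sequences above (killprocess 1416519, then 1416158) and the ones at the end of this phase all follow the same traced shape: confirm the pid is still alive with kill -0, read the process name with ps, refuse to signal anything named sudo, then kill and reap it with a separate wait. A condensed sketch of that logic, not the helper verbatim:

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                    # the '[' -z ... ']' in the trace
  kill -0 "$pid" 2>/dev/null || return 0       # nothing left to kill
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 / reactor_1
    [ "$name" = sudo ] && return 1             # never kill the sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"
}
killprocess "$nvmfpid" && wait "$nvmfpid"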
00:22:08.666 [2024-07-15 14:08:06.754319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.610 [2024-07-15 14:08:07.444385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.610 malloc0 00:22:09.610 [2024-07-15 14:08:07.471160] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.610 [2024-07-15 14:08:07.471357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1417222 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1417222 /var/tmp/bdevperf.sock 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1417222 ']' 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.610 14:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.610 [2024-07-15 14:08:07.546883] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
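This phase attaches the initiator through the keyring API instead of the deprecated spdk_nvme_ctrlr_opts.psk path: the PSK file is first registered with the bdevperf app as a named key, and the controller attach then references key0 rather than the file. The two RPCs issued just below are, condensed:

# Register the PSK file as a named key in the bdevperf app...
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key \
    key0 /tmp/tmp.tYucyz5m3v
# ...then attach over TLS by key name.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Because the key is named, the setup survives save_config: the tgtcfg and bperfcfg dumps taken at the end of this run both carry a keyring_file_add_key entry for key0, which is what lets the restart at tls.sh@269 below feed the saved target JSON straight back in via /dev/fd/62.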
00:22:09.610 [2024-07-15 14:08:07.546928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417222 ] 00:22:09.610 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.610 [2024-07-15 14:08:07.628351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.610 [2024-07-15 14:08:07.681718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.551 14:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.551 14:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:10.551 14:08:08 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tYucyz5m3v 00:22:10.551 14:08:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:10.551 [2024-07-15 14:08:08.603051] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.810 nvme0n1 00:22:10.811 14:08:08 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.811 Running I/O for 1 seconds... 00:22:11.750 00:22:11.750 Latency(us) 00:22:11.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.750 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:11.750 Verification LBA range: start 0x0 length 0x2000 00:22:11.750 nvme0n1 : 1.06 4755.54 18.58 0.00 0.00 26279.93 6007.47 52647.25 00:22:11.750 =================================================================================================================== 00:22:11.750 Total : 4755.54 18.58 0.00 0.00 26279.93 6007.47 52647.25 00:22:11.750 0 00:22:11.750 14:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:11.750 14:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.750 14:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.010 14:08:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.010 14:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:12.010 "subsystems": [ 00:22:12.010 { 00:22:12.010 "subsystem": "keyring", 00:22:12.010 "config": [ 00:22:12.010 { 00:22:12.010 "method": "keyring_file_add_key", 00:22:12.010 "params": { 00:22:12.010 "name": "key0", 00:22:12.010 "path": "/tmp/tmp.tYucyz5m3v" 00:22:12.010 } 00:22:12.010 } 00:22:12.010 ] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "iobuf", 00:22:12.010 "config": [ 00:22:12.010 { 00:22:12.010 "method": "iobuf_set_options", 00:22:12.010 "params": { 00:22:12.010 "small_pool_count": 8192, 00:22:12.010 "large_pool_count": 1024, 00:22:12.010 "small_bufsize": 8192, 00:22:12.010 "large_bufsize": 135168 00:22:12.010 } 00:22:12.010 } 00:22:12.010 ] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "sock", 00:22:12.010 "config": [ 00:22:12.010 { 00:22:12.010 "method": "sock_set_default_impl", 00:22:12.010 "params": { 00:22:12.010 "impl_name": "posix" 00:22:12.010 } 
00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "sock_impl_set_options", 00:22:12.010 "params": { 00:22:12.010 "impl_name": "ssl", 00:22:12.010 "recv_buf_size": 4096, 00:22:12.010 "send_buf_size": 4096, 00:22:12.010 "enable_recv_pipe": true, 00:22:12.010 "enable_quickack": false, 00:22:12.010 "enable_placement_id": 0, 00:22:12.010 "enable_zerocopy_send_server": true, 00:22:12.010 "enable_zerocopy_send_client": false, 00:22:12.010 "zerocopy_threshold": 0, 00:22:12.010 "tls_version": 0, 00:22:12.010 "enable_ktls": false 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "sock_impl_set_options", 00:22:12.010 "params": { 00:22:12.010 "impl_name": "posix", 00:22:12.010 "recv_buf_size": 2097152, 00:22:12.010 "send_buf_size": 2097152, 00:22:12.010 "enable_recv_pipe": true, 00:22:12.010 "enable_quickack": false, 00:22:12.010 "enable_placement_id": 0, 00:22:12.010 "enable_zerocopy_send_server": true, 00:22:12.010 "enable_zerocopy_send_client": false, 00:22:12.010 "zerocopy_threshold": 0, 00:22:12.010 "tls_version": 0, 00:22:12.010 "enable_ktls": false 00:22:12.010 } 00:22:12.010 } 00:22:12.010 ] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "vmd", 00:22:12.010 "config": [] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "accel", 00:22:12.010 "config": [ 00:22:12.010 { 00:22:12.010 "method": "accel_set_options", 00:22:12.010 "params": { 00:22:12.010 "small_cache_size": 128, 00:22:12.010 "large_cache_size": 16, 00:22:12.010 "task_count": 2048, 00:22:12.010 "sequence_count": 2048, 00:22:12.010 "buf_count": 2048 00:22:12.010 } 00:22:12.010 } 00:22:12.010 ] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "bdev", 00:22:12.010 "config": [ 00:22:12.010 { 00:22:12.010 "method": "bdev_set_options", 00:22:12.010 "params": { 00:22:12.010 "bdev_io_pool_size": 65535, 00:22:12.010 "bdev_io_cache_size": 256, 00:22:12.010 "bdev_auto_examine": true, 00:22:12.010 "iobuf_small_cache_size": 128, 00:22:12.010 "iobuf_large_cache_size": 16 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "bdev_raid_set_options", 00:22:12.010 "params": { 00:22:12.010 "process_window_size_kb": 1024 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "bdev_iscsi_set_options", 00:22:12.010 "params": { 00:22:12.010 "timeout_sec": 30 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "bdev_nvme_set_options", 00:22:12.010 "params": { 00:22:12.010 "action_on_timeout": "none", 00:22:12.010 "timeout_us": 0, 00:22:12.010 "timeout_admin_us": 0, 00:22:12.010 "keep_alive_timeout_ms": 10000, 00:22:12.010 "arbitration_burst": 0, 00:22:12.010 "low_priority_weight": 0, 00:22:12.010 "medium_priority_weight": 0, 00:22:12.010 "high_priority_weight": 0, 00:22:12.010 "nvme_adminq_poll_period_us": 10000, 00:22:12.010 "nvme_ioq_poll_period_us": 0, 00:22:12.010 "io_queue_requests": 0, 00:22:12.010 "delay_cmd_submit": true, 00:22:12.010 "transport_retry_count": 4, 00:22:12.010 "bdev_retry_count": 3, 00:22:12.010 "transport_ack_timeout": 0, 00:22:12.010 "ctrlr_loss_timeout_sec": 0, 00:22:12.010 "reconnect_delay_sec": 0, 00:22:12.010 "fast_io_fail_timeout_sec": 0, 00:22:12.010 "disable_auto_failback": false, 00:22:12.010 "generate_uuids": false, 00:22:12.010 "transport_tos": 0, 00:22:12.010 "nvme_error_stat": false, 00:22:12.010 "rdma_srq_size": 0, 00:22:12.010 "io_path_stat": false, 00:22:12.010 "allow_accel_sequence": false, 00:22:12.010 "rdma_max_cq_size": 0, 00:22:12.010 "rdma_cm_event_timeout_ms": 0, 00:22:12.010 "dhchap_digests": [ 00:22:12.010 "sha256", 
00:22:12.010 "sha384", 00:22:12.010 "sha512" 00:22:12.010 ], 00:22:12.010 "dhchap_dhgroups": [ 00:22:12.010 "null", 00:22:12.010 "ffdhe2048", 00:22:12.010 "ffdhe3072", 00:22:12.010 "ffdhe4096", 00:22:12.010 "ffdhe6144", 00:22:12.010 "ffdhe8192" 00:22:12.010 ] 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "bdev_nvme_set_hotplug", 00:22:12.010 "params": { 00:22:12.010 "period_us": 100000, 00:22:12.010 "enable": false 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "bdev_malloc_create", 00:22:12.010 "params": { 00:22:12.010 "name": "malloc0", 00:22:12.010 "num_blocks": 8192, 00:22:12.010 "block_size": 4096, 00:22:12.010 "physical_block_size": 4096, 00:22:12.010 "uuid": "85026785-ceef-42be-8b54-ead753f7a65d", 00:22:12.010 "optimal_io_boundary": 0 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "method": "bdev_wait_for_examine" 00:22:12.010 } 00:22:12.010 ] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "nbd", 00:22:12.010 "config": [] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "scheduler", 00:22:12.010 "config": [ 00:22:12.010 { 00:22:12.010 "method": "framework_set_scheduler", 00:22:12.010 "params": { 00:22:12.010 "name": "static" 00:22:12.010 } 00:22:12.010 } 00:22:12.010 ] 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "subsystem": "nvmf", 00:22:12.010 "config": [ 00:22:12.010 { 00:22:12.011 "method": "nvmf_set_config", 00:22:12.011 "params": { 00:22:12.011 "discovery_filter": "match_any", 00:22:12.011 "admin_cmd_passthru": { 00:22:12.011 "identify_ctrlr": false 00:22:12.011 } 00:22:12.011 } 00:22:12.011 }, 00:22:12.011 { 00:22:12.011 "method": "nvmf_set_max_subsystems", 00:22:12.011 "params": { 00:22:12.011 "max_subsystems": 1024 00:22:12.011 } 00:22:12.011 }, 00:22:12.011 { 00:22:12.011 "method": "nvmf_set_crdt", 00:22:12.011 "params": { 00:22:12.011 "crdt1": 0, 00:22:12.011 "crdt2": 0, 00:22:12.011 "crdt3": 0 00:22:12.011 } 00:22:12.011 }, 00:22:12.011 { 00:22:12.011 "method": "nvmf_create_transport", 00:22:12.011 "params": { 00:22:12.011 "trtype": "TCP", 00:22:12.011 "max_queue_depth": 128, 00:22:12.011 "max_io_qpairs_per_ctrlr": 127, 00:22:12.011 "in_capsule_data_size": 4096, 00:22:12.011 "max_io_size": 131072, 00:22:12.011 "io_unit_size": 131072, 00:22:12.011 "max_aq_depth": 128, 00:22:12.011 "num_shared_buffers": 511, 00:22:12.011 "buf_cache_size": 4294967295, 00:22:12.011 "dif_insert_or_strip": false, 00:22:12.011 "zcopy": false, 00:22:12.011 "c2h_success": false, 00:22:12.011 "sock_priority": 0, 00:22:12.011 "abort_timeout_sec": 1, 00:22:12.011 "ack_timeout": 0, 00:22:12.011 "data_wr_pool_size": 0 00:22:12.011 } 00:22:12.011 }, 00:22:12.011 { 00:22:12.011 "method": "nvmf_create_subsystem", 00:22:12.011 "params": { 00:22:12.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.011 "allow_any_host": false, 00:22:12.011 "serial_number": "00000000000000000000", 00:22:12.011 "model_number": "SPDK bdev Controller", 00:22:12.011 "max_namespaces": 32, 00:22:12.011 "min_cntlid": 1, 00:22:12.011 "max_cntlid": 65519, 00:22:12.011 "ana_reporting": false 00:22:12.011 } 00:22:12.011 }, 00:22:12.011 { 00:22:12.011 "method": "nvmf_subsystem_add_host", 00:22:12.011 "params": { 00:22:12.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.011 "host": "nqn.2016-06.io.spdk:host1", 00:22:12.011 "psk": "key0" 00:22:12.011 } 00:22:12.011 }, 00:22:12.011 { 00:22:12.011 "method": "nvmf_subsystem_add_ns", 00:22:12.011 "params": { 00:22:12.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.011 "namespace": { 00:22:12.011 "nsid": 1, 
00:22:12.011 "bdev_name": "malloc0", 00:22:12.011 "nguid": "85026785CEEF42BE8B54EAD753F7A65D", 00:22:12.011 "uuid": "85026785-ceef-42be-8b54-ead753f7a65d", 00:22:12.011 "no_auto_visible": false 00:22:12.011 } 00:22:12.011 } 00:22:12.011 }, 00:22:12.011 { 00:22:12.011 "method": "nvmf_subsystem_add_listener", 00:22:12.011 "params": { 00:22:12.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.011 "listen_address": { 00:22:12.011 "trtype": "TCP", 00:22:12.011 "adrfam": "IPv4", 00:22:12.011 "traddr": "10.0.0.2", 00:22:12.011 "trsvcid": "4420" 00:22:12.011 }, 00:22:12.011 "secure_channel": true 00:22:12.011 } 00:22:12.011 } 00:22:12.011 ] 00:22:12.011 } 00:22:12.011 ] 00:22:12.011 }' 00:22:12.011 14:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:12.271 14:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:12.271 "subsystems": [ 00:22:12.271 { 00:22:12.271 "subsystem": "keyring", 00:22:12.271 "config": [ 00:22:12.271 { 00:22:12.271 "method": "keyring_file_add_key", 00:22:12.271 "params": { 00:22:12.271 "name": "key0", 00:22:12.271 "path": "/tmp/tmp.tYucyz5m3v" 00:22:12.271 } 00:22:12.271 } 00:22:12.271 ] 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "subsystem": "iobuf", 00:22:12.271 "config": [ 00:22:12.271 { 00:22:12.271 "method": "iobuf_set_options", 00:22:12.271 "params": { 00:22:12.271 "small_pool_count": 8192, 00:22:12.271 "large_pool_count": 1024, 00:22:12.271 "small_bufsize": 8192, 00:22:12.271 "large_bufsize": 135168 00:22:12.271 } 00:22:12.271 } 00:22:12.271 ] 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "subsystem": "sock", 00:22:12.271 "config": [ 00:22:12.271 { 00:22:12.271 "method": "sock_set_default_impl", 00:22:12.271 "params": { 00:22:12.271 "impl_name": "posix" 00:22:12.271 } 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "method": "sock_impl_set_options", 00:22:12.271 "params": { 00:22:12.271 "impl_name": "ssl", 00:22:12.271 "recv_buf_size": 4096, 00:22:12.271 "send_buf_size": 4096, 00:22:12.271 "enable_recv_pipe": true, 00:22:12.271 "enable_quickack": false, 00:22:12.271 "enable_placement_id": 0, 00:22:12.271 "enable_zerocopy_send_server": true, 00:22:12.271 "enable_zerocopy_send_client": false, 00:22:12.271 "zerocopy_threshold": 0, 00:22:12.271 "tls_version": 0, 00:22:12.271 "enable_ktls": false 00:22:12.271 } 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "method": "sock_impl_set_options", 00:22:12.271 "params": { 00:22:12.271 "impl_name": "posix", 00:22:12.271 "recv_buf_size": 2097152, 00:22:12.271 "send_buf_size": 2097152, 00:22:12.271 "enable_recv_pipe": true, 00:22:12.271 "enable_quickack": false, 00:22:12.271 "enable_placement_id": 0, 00:22:12.271 "enable_zerocopy_send_server": true, 00:22:12.271 "enable_zerocopy_send_client": false, 00:22:12.271 "zerocopy_threshold": 0, 00:22:12.271 "tls_version": 0, 00:22:12.271 "enable_ktls": false 00:22:12.271 } 00:22:12.271 } 00:22:12.271 ] 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "subsystem": "vmd", 00:22:12.271 "config": [] 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "subsystem": "accel", 00:22:12.271 "config": [ 00:22:12.271 { 00:22:12.271 "method": "accel_set_options", 00:22:12.271 "params": { 00:22:12.271 "small_cache_size": 128, 00:22:12.271 "large_cache_size": 16, 00:22:12.271 "task_count": 2048, 00:22:12.271 "sequence_count": 2048, 00:22:12.271 "buf_count": 2048 00:22:12.271 } 00:22:12.271 } 00:22:12.271 ] 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "subsystem": "bdev", 00:22:12.271 "config": [ 
00:22:12.271 { 00:22:12.271 "method": "bdev_set_options", 00:22:12.271 "params": { 00:22:12.271 "bdev_io_pool_size": 65535, 00:22:12.271 "bdev_io_cache_size": 256, 00:22:12.271 "bdev_auto_examine": true, 00:22:12.271 "iobuf_small_cache_size": 128, 00:22:12.271 "iobuf_large_cache_size": 16 00:22:12.271 } 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "method": "bdev_raid_set_options", 00:22:12.271 "params": { 00:22:12.271 "process_window_size_kb": 1024 00:22:12.271 } 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "method": "bdev_iscsi_set_options", 00:22:12.271 "params": { 00:22:12.271 "timeout_sec": 30 00:22:12.271 } 00:22:12.271 }, 00:22:12.271 { 00:22:12.271 "method": "bdev_nvme_set_options", 00:22:12.271 "params": { 00:22:12.271 "action_on_timeout": "none", 00:22:12.271 "timeout_us": 0, 00:22:12.271 "timeout_admin_us": 0, 00:22:12.271 "keep_alive_timeout_ms": 10000, 00:22:12.271 "arbitration_burst": 0, 00:22:12.271 "low_priority_weight": 0, 00:22:12.271 "medium_priority_weight": 0, 00:22:12.271 "high_priority_weight": 0, 00:22:12.271 "nvme_adminq_poll_period_us": 10000, 00:22:12.271 "nvme_ioq_poll_period_us": 0, 00:22:12.271 "io_queue_requests": 512, 00:22:12.271 "delay_cmd_submit": true, 00:22:12.271 "transport_retry_count": 4, 00:22:12.271 "bdev_retry_count": 3, 00:22:12.271 "transport_ack_timeout": 0, 00:22:12.271 "ctrlr_loss_timeout_sec": 0, 00:22:12.271 "reconnect_delay_sec": 0, 00:22:12.271 "fast_io_fail_timeout_sec": 0, 00:22:12.271 "disable_auto_failback": false, 00:22:12.272 "generate_uuids": false, 00:22:12.272 "transport_tos": 0, 00:22:12.272 "nvme_error_stat": false, 00:22:12.272 "rdma_srq_size": 0, 00:22:12.272 "io_path_stat": false, 00:22:12.272 "allow_accel_sequence": false, 00:22:12.272 "rdma_max_cq_size": 0, 00:22:12.272 "rdma_cm_event_timeout_ms": 0, 00:22:12.272 "dhchap_digests": [ 00:22:12.272 "sha256", 00:22:12.272 "sha384", 00:22:12.272 "sha512" 00:22:12.272 ], 00:22:12.272 "dhchap_dhgroups": [ 00:22:12.272 "null", 00:22:12.272 "ffdhe2048", 00:22:12.272 "ffdhe3072", 00:22:12.272 "ffdhe4096", 00:22:12.272 "ffdhe6144", 00:22:12.272 "ffdhe8192" 00:22:12.272 ] 00:22:12.272 } 00:22:12.272 }, 00:22:12.272 { 00:22:12.272 "method": "bdev_nvme_attach_controller", 00:22:12.272 "params": { 00:22:12.272 "name": "nvme0", 00:22:12.272 "trtype": "TCP", 00:22:12.272 "adrfam": "IPv4", 00:22:12.272 "traddr": "10.0.0.2", 00:22:12.272 "trsvcid": "4420", 00:22:12.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.272 "prchk_reftag": false, 00:22:12.272 "prchk_guard": false, 00:22:12.272 "ctrlr_loss_timeout_sec": 0, 00:22:12.272 "reconnect_delay_sec": 0, 00:22:12.272 "fast_io_fail_timeout_sec": 0, 00:22:12.272 "psk": "key0", 00:22:12.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.272 "hdgst": false, 00:22:12.272 "ddgst": false 00:22:12.272 } 00:22:12.272 }, 00:22:12.272 { 00:22:12.272 "method": "bdev_nvme_set_hotplug", 00:22:12.272 "params": { 00:22:12.272 "period_us": 100000, 00:22:12.272 "enable": false 00:22:12.272 } 00:22:12.272 }, 00:22:12.272 { 00:22:12.272 "method": "bdev_enable_histogram", 00:22:12.272 "params": { 00:22:12.272 "name": "nvme0n1", 00:22:12.272 "enable": true 00:22:12.272 } 00:22:12.272 }, 00:22:12.272 { 00:22:12.272 "method": "bdev_wait_for_examine" 00:22:12.272 } 00:22:12.272 ] 00:22:12.272 }, 00:22:12.272 { 00:22:12.272 "subsystem": "nbd", 00:22:12.272 "config": [] 00:22:12.272 } 00:22:12.272 ] 00:22:12.272 }' 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1417222 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1417222 ']' 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1417222 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1417222 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1417222' 00:22:12.272 killing process with pid 1417222 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1417222 00:22:12.272 Received shutdown signal, test time was about 1.000000 seconds 00:22:12.272 00:22:12.272 Latency(us) 00:22:12.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.272 =================================================================================================================== 00:22:12.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1417222 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1417088 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1417088 ']' 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1417088 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.272 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1417088 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1417088' 00:22:12.532 killing process with pid 1417088 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1417088 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1417088 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.532 14:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:12.532 "subsystems": [ 00:22:12.532 { 00:22:12.532 "subsystem": "keyring", 00:22:12.532 "config": [ 00:22:12.532 { 00:22:12.532 "method": "keyring_file_add_key", 00:22:12.532 "params": { 00:22:12.532 "name": "key0", 00:22:12.532 "path": "/tmp/tmp.tYucyz5m3v" 00:22:12.532 } 00:22:12.532 } 00:22:12.532 ] 00:22:12.532 }, 00:22:12.532 { 00:22:12.532 "subsystem": "iobuf", 00:22:12.532 "config": [ 00:22:12.532 { 00:22:12.532 "method": "iobuf_set_options", 00:22:12.532 "params": { 00:22:12.532 "small_pool_count": 8192, 00:22:12.532 "large_pool_count": 1024, 00:22:12.532 "small_bufsize": 8192, 00:22:12.532 
"large_bufsize": 135168 00:22:12.532 } 00:22:12.532 } 00:22:12.532 ] 00:22:12.532 }, 00:22:12.532 { 00:22:12.532 "subsystem": "sock", 00:22:12.532 "config": [ 00:22:12.532 { 00:22:12.532 "method": "sock_set_default_impl", 00:22:12.532 "params": { 00:22:12.532 "impl_name": "posix" 00:22:12.532 } 00:22:12.532 }, 00:22:12.532 { 00:22:12.532 "method": "sock_impl_set_options", 00:22:12.532 "params": { 00:22:12.532 "impl_name": "ssl", 00:22:12.532 "recv_buf_size": 4096, 00:22:12.532 "send_buf_size": 4096, 00:22:12.532 "enable_recv_pipe": true, 00:22:12.533 "enable_quickack": false, 00:22:12.533 "enable_placement_id": 0, 00:22:12.533 "enable_zerocopy_send_server": true, 00:22:12.533 "enable_zerocopy_send_client": false, 00:22:12.533 "zerocopy_threshold": 0, 00:22:12.533 "tls_version": 0, 00:22:12.533 "enable_ktls": false 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "sock_impl_set_options", 00:22:12.533 "params": { 00:22:12.533 "impl_name": "posix", 00:22:12.533 "recv_buf_size": 2097152, 00:22:12.533 "send_buf_size": 2097152, 00:22:12.533 "enable_recv_pipe": true, 00:22:12.533 "enable_quickack": false, 00:22:12.533 "enable_placement_id": 0, 00:22:12.533 "enable_zerocopy_send_server": true, 00:22:12.533 "enable_zerocopy_send_client": false, 00:22:12.533 "zerocopy_threshold": 0, 00:22:12.533 "tls_version": 0, 00:22:12.533 "enable_ktls": false 00:22:12.533 } 00:22:12.533 } 00:22:12.533 ] 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "subsystem": "vmd", 00:22:12.533 "config": [] 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "subsystem": "accel", 00:22:12.533 "config": [ 00:22:12.533 { 00:22:12.533 "method": "accel_set_options", 00:22:12.533 "params": { 00:22:12.533 "small_cache_size": 128, 00:22:12.533 "large_cache_size": 16, 00:22:12.533 "task_count": 2048, 00:22:12.533 "sequence_count": 2048, 00:22:12.533 "buf_count": 2048 00:22:12.533 } 00:22:12.533 } 00:22:12.533 ] 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "subsystem": "bdev", 00:22:12.533 "config": [ 00:22:12.533 { 00:22:12.533 "method": "bdev_set_options", 00:22:12.533 "params": { 00:22:12.533 "bdev_io_pool_size": 65535, 00:22:12.533 "bdev_io_cache_size": 256, 00:22:12.533 "bdev_auto_examine": true, 00:22:12.533 "iobuf_small_cache_size": 128, 00:22:12.533 "iobuf_large_cache_size": 16 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "bdev_raid_set_options", 00:22:12.533 "params": { 00:22:12.533 "process_window_size_kb": 1024 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "bdev_iscsi_set_options", 00:22:12.533 "params": { 00:22:12.533 "timeout_sec": 30 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "bdev_nvme_set_options", 00:22:12.533 "params": { 00:22:12.533 "action_on_timeout": "none", 00:22:12.533 "timeout_us": 0, 00:22:12.533 "timeout_admin_us": 0, 00:22:12.533 "keep_alive_timeout_ms": 10000, 00:22:12.533 "arbitration_burst": 0, 00:22:12.533 "low_priority_weight": 0, 00:22:12.533 "medium_priority_weight": 0, 00:22:12.533 "high_priority_weight": 0, 00:22:12.533 "nvme_adminq_poll_period_us": 10000, 00:22:12.533 "nvme_ioq_poll_period_us": 0, 00:22:12.533 "io_queue_requests": 0, 00:22:12.533 "delay_cmd_submit": true, 00:22:12.533 "transport_retry_count": 4, 00:22:12.533 "bdev_retry_count": 3, 00:22:12.533 "transport_ack_timeout": 0, 00:22:12.533 "ctrlr_loss_timeout_sec": 0, 00:22:12.533 "reconnect_delay_sec": 0, 00:22:12.533 "fast_io_fail_timeout_sec": 0, 00:22:12.533 "disable_auto_failback": false, 00:22:12.533 "generate_uuids": false, 00:22:12.533 
"transport_tos": 0, 00:22:12.533 "nvme_error_stat": false, 00:22:12.533 "rdma_srq_size": 0, 00:22:12.533 "io_path_stat": false, 00:22:12.533 "allow_accel_sequence": false, 00:22:12.533 "rdma_max_cq_size": 0, 00:22:12.533 "rdma_cm_event_timeout_ms": 0, 00:22:12.533 "dhchap_digests": [ 00:22:12.533 "sha256", 00:22:12.533 "sha384", 00:22:12.533 "sha512" 00:22:12.533 ], 00:22:12.533 "dhchap_dhgroups": [ 00:22:12.533 "null", 00:22:12.533 "ffdhe2048", 00:22:12.533 "ffdhe3072", 00:22:12.533 "ffdhe4096", 00:22:12.533 "ffdhe6144", 00:22:12.533 "ffdhe8192" 00:22:12.533 ] 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "bdev_nvme_set_hotplug", 00:22:12.533 "params": { 00:22:12.533 "period_us": 100000, 00:22:12.533 "enable": false 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "bdev_malloc_create", 00:22:12.533 "params": { 00:22:12.533 "name": "malloc0", 00:22:12.533 "num_blocks": 8192, 00:22:12.533 "block_size": 4096, 00:22:12.533 "physical_block_size": 4096, 00:22:12.533 "uuid": "85026785-ceef-42be-8b54-ead753f7a65d", 00:22:12.533 "optimal_io_boundary": 0 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "bdev_wait_for_examine" 00:22:12.533 } 00:22:12.533 ] 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "subsystem": "nbd", 00:22:12.533 "config": [] 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "subsystem": "scheduler", 00:22:12.533 "config": [ 00:22:12.533 { 00:22:12.533 "method": "framework_set_scheduler", 00:22:12.533 "params": { 00:22:12.533 "name": "static" 00:22:12.533 } 00:22:12.533 } 00:22:12.533 ] 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "subsystem": "nvmf", 00:22:12.533 "config": [ 00:22:12.533 { 00:22:12.533 "method": "nvmf_set_config", 00:22:12.533 "params": { 00:22:12.533 "discovery_filter": "match_any", 00:22:12.533 "admin_cmd_passthru": { 00:22:12.533 "identify_ctrlr": false 00:22:12.533 } 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "nvmf_set_max_subsystems", 00:22:12.533 "params": { 00:22:12.533 "max_subsystems": 1024 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "nvmf_set_crdt", 00:22:12.533 "params": { 00:22:12.533 "crdt1": 0, 00:22:12.533 "crdt2": 0, 00:22:12.533 "crdt3": 0 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "nvmf_create_transport", 00:22:12.533 "params": { 00:22:12.533 "trtype": "TCP", 00:22:12.533 "max_queue_depth": 128, 00:22:12.533 "max_io_qpairs_per_ctrlr": 127, 00:22:12.533 "in_capsule_data_size": 4096, 00:22:12.533 "max_io_size": 131072, 00:22:12.533 "io_unit_size": 131072, 00:22:12.533 "max_aq_depth": 128, 00:22:12.533 "num_shared_buffers": 511, 00:22:12.533 "buf_cache_size": 4294967295, 00:22:12.533 "dif_insert_or_strip": false, 00:22:12.533 "zcopy": false, 00:22:12.533 "c2h_success": false, 00:22:12.533 "sock_priority": 0, 00:22:12.533 "abort_timeout_sec": 1, 00:22:12.533 "ack_timeout": 0, 00:22:12.533 "data_wr_pool_size": 0 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "nvmf_create_subsystem", 00:22:12.533 "params": { 00:22:12.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.533 "allow_any_host": false, 00:22:12.533 "serial_number": "00000000000000000000", 00:22:12.533 "model_number": "SPDK bdev Controller", 00:22:12.533 "max_namespaces": 32, 00:22:12.533 "min_cntlid": 1, 00:22:12.533 "max_cntlid": 65519, 00:22:12.533 "ana_reporting": false 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "nvmf_subsystem_add_host", 00:22:12.533 "params": { 00:22:12.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:12.533 "host": "nqn.2016-06.io.spdk:host1", 00:22:12.533 "psk": "key0" 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "nvmf_subsystem_add_ns", 00:22:12.533 "params": { 00:22:12.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.533 "namespace": { 00:22:12.533 "nsid": 1, 00:22:12.533 "bdev_name": "malloc0", 00:22:12.533 "nguid": "85026785CEEF42BE8B54EAD753F7A65D", 00:22:12.533 "uuid": "85026785-ceef-42be-8b54-ead753f7a65d", 00:22:12.533 "no_auto_visible": false 00:22:12.533 } 00:22:12.533 } 00:22:12.533 }, 00:22:12.533 { 00:22:12.533 "method": "nvmf_subsystem_add_listener", 00:22:12.533 "params": { 00:22:12.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.533 "listen_address": { 00:22:12.533 "trtype": "TCP", 00:22:12.533 "adrfam": "IPv4", 00:22:12.533 "traddr": "10.0.0.2", 00:22:12.533 "trsvcid": "4420" 00:22:12.533 }, 00:22:12.533 "secure_channel": true 00:22:12.533 } 00:22:12.533 } 00:22:12.533 ] 00:22:12.533 } 00:22:12.533 ] 00:22:12.533 }' 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1417905 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1417905 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1417905 ']' 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.533 14:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.533 [2024-07-15 14:08:10.632606] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:12.533 [2024-07-15 14:08:10.632660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.793 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.794 [2024-07-15 14:08:10.705331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.794 [2024-07-15 14:08:10.766589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.794 [2024-07-15 14:08:10.766630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.794 [2024-07-15 14:08:10.766638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.794 [2024-07-15 14:08:10.766645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.794 [2024-07-15 14:08:10.766650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:12.794 [2024-07-15 14:08:10.766707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.054 [2024-07-15 14:08:10.963789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.054 [2024-07-15 14:08:10.995795] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.054 [2024-07-15 14:08:11.009048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.314 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.314 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:13.314 14:08:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.314 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.314 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1418017 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1418017 /var/tmp/bdevperf.sock 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1418017 ']' 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
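The `waitforlisten` step above blocks until the freshly started app answers on its RPC UNIX domain socket. A hedged sketch of the core idea, with the retry budget taken from the `max_retries=100` visible in the trace (the real helper in autotest_common.sh also checks that the pid is still alive; the poll interval here is illustrative):

# Poll the app's RPC socket until it responds or retries run out.
rpc_sock=/var/tmp/bdevperf.sock
for ((i = 0; i < 100; i++)); do
    if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1   # interval is an assumption, not the harness value
done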
00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.575 14:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:13.575 "subsystems": [ 00:22:13.575 { 00:22:13.575 "subsystem": "keyring", 00:22:13.575 "config": [ 00:22:13.575 { 00:22:13.575 "method": "keyring_file_add_key", 00:22:13.575 "params": { 00:22:13.575 "name": "key0", 00:22:13.575 "path": "/tmp/tmp.tYucyz5m3v" 00:22:13.575 } 00:22:13.575 } 00:22:13.575 ] 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "subsystem": "iobuf", 00:22:13.575 "config": [ 00:22:13.575 { 00:22:13.575 "method": "iobuf_set_options", 00:22:13.575 "params": { 00:22:13.575 "small_pool_count": 8192, 00:22:13.575 "large_pool_count": 1024, 00:22:13.575 "small_bufsize": 8192, 00:22:13.575 "large_bufsize": 135168 00:22:13.575 } 00:22:13.575 } 00:22:13.575 ] 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "subsystem": "sock", 00:22:13.575 "config": [ 00:22:13.575 { 00:22:13.575 "method": "sock_set_default_impl", 00:22:13.575 "params": { 00:22:13.575 "impl_name": "posix" 00:22:13.575 } 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "method": "sock_impl_set_options", 00:22:13.575 "params": { 00:22:13.575 "impl_name": "ssl", 00:22:13.575 "recv_buf_size": 4096, 00:22:13.575 "send_buf_size": 4096, 00:22:13.575 "enable_recv_pipe": true, 00:22:13.575 "enable_quickack": false, 00:22:13.575 "enable_placement_id": 0, 00:22:13.575 "enable_zerocopy_send_server": true, 00:22:13.575 "enable_zerocopy_send_client": false, 00:22:13.575 "zerocopy_threshold": 0, 00:22:13.575 "tls_version": 0, 00:22:13.575 "enable_ktls": false 00:22:13.575 } 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "method": "sock_impl_set_options", 00:22:13.575 "params": { 00:22:13.575 "impl_name": "posix", 00:22:13.575 "recv_buf_size": 2097152, 00:22:13.575 "send_buf_size": 2097152, 00:22:13.575 "enable_recv_pipe": true, 00:22:13.575 "enable_quickack": false, 00:22:13.575 "enable_placement_id": 0, 00:22:13.575 "enable_zerocopy_send_server": true, 00:22:13.575 "enable_zerocopy_send_client": false, 00:22:13.575 "zerocopy_threshold": 0, 00:22:13.575 "tls_version": 0, 00:22:13.575 "enable_ktls": false 00:22:13.575 } 00:22:13.575 } 00:22:13.575 ] 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "subsystem": "vmd", 00:22:13.575 "config": [] 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "subsystem": "accel", 00:22:13.575 "config": [ 00:22:13.575 { 00:22:13.575 "method": "accel_set_options", 00:22:13.575 "params": { 00:22:13.575 "small_cache_size": 128, 00:22:13.575 "large_cache_size": 16, 00:22:13.575 "task_count": 2048, 00:22:13.575 "sequence_count": 2048, 00:22:13.575 "buf_count": 2048 00:22:13.575 } 00:22:13.575 } 00:22:13.575 ] 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "subsystem": "bdev", 00:22:13.575 "config": [ 00:22:13.575 { 00:22:13.575 "method": "bdev_set_options", 00:22:13.575 "params": { 00:22:13.575 "bdev_io_pool_size": 65535, 00:22:13.575 "bdev_io_cache_size": 256, 00:22:13.575 "bdev_auto_examine": true, 00:22:13.575 "iobuf_small_cache_size": 128, 00:22:13.575 "iobuf_large_cache_size": 16 00:22:13.575 } 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "method": "bdev_raid_set_options", 00:22:13.575 "params": { 00:22:13.575 "process_window_size_kb": 1024 00:22:13.575 } 
00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "method": "bdev_iscsi_set_options", 00:22:13.575 "params": { 00:22:13.575 "timeout_sec": 30 00:22:13.575 } 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "method": "bdev_nvme_set_options", 00:22:13.575 "params": { 00:22:13.575 "action_on_timeout": "none", 00:22:13.575 "timeout_us": 0, 00:22:13.575 "timeout_admin_us": 0, 00:22:13.575 "keep_alive_timeout_ms": 10000, 00:22:13.575 "arbitration_burst": 0, 00:22:13.575 "low_priority_weight": 0, 00:22:13.575 "medium_priority_weight": 0, 00:22:13.575 "high_priority_weight": 0, 00:22:13.575 "nvme_adminq_poll_period_us": 10000, 00:22:13.575 "nvme_ioq_poll_period_us": 0, 00:22:13.575 "io_queue_requests": 512, 00:22:13.575 "delay_cmd_submit": true, 00:22:13.575 "transport_retry_count": 4, 00:22:13.575 "bdev_retry_count": 3, 00:22:13.575 "transport_ack_timeout": 0, 00:22:13.575 "ctrlr_loss_timeout_sec": 0, 00:22:13.575 "reconnect_delay_sec": 0, 00:22:13.575 "fast_io_fail_timeout_sec": 0, 00:22:13.575 "disable_auto_failback": false, 00:22:13.575 "generate_uuids": false, 00:22:13.575 "transport_tos": 0, 00:22:13.575 "nvme_error_stat": false, 00:22:13.575 "rdma_srq_size": 0, 00:22:13.575 "io_path_stat": false, 00:22:13.575 "allow_accel_sequence": false, 00:22:13.575 "rdma_max_cq_size": 0, 00:22:13.575 "rdma_cm_event_timeout_ms": 0, 00:22:13.575 "dhchap_digests": [ 00:22:13.575 "sha256", 00:22:13.575 "sha384", 00:22:13.575 "sha512" 00:22:13.575 ], 00:22:13.575 "dhchap_dhgroups": [ 00:22:13.575 "null", 00:22:13.575 "ffdhe2048", 00:22:13.575 "ffdhe3072", 00:22:13.575 "ffdhe4096", 00:22:13.575 "ffdhe6144", 00:22:13.575 "ffdhe8192" 00:22:13.575 ] 00:22:13.575 } 00:22:13.575 }, 00:22:13.575 { 00:22:13.575 "method": "bdev_nvme_attach_controller", 00:22:13.575 "params": { 00:22:13.575 "name": "nvme0", 00:22:13.575 "trtype": "TCP", 00:22:13.575 "adrfam": "IPv4", 00:22:13.575 "traddr": "10.0.0.2", 00:22:13.575 "trsvcid": "4420", 00:22:13.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.575 "prchk_reftag": false, 00:22:13.575 "prchk_guard": false, 00:22:13.575 "ctrlr_loss_timeout_sec": 0, 00:22:13.575 "reconnect_delay_sec": 0, 00:22:13.575 "fast_io_fail_timeout_sec": 0, 00:22:13.575 "psk": "key0", 00:22:13.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.576 "hdgst": false, 00:22:13.576 "ddgst": false 00:22:13.576 } 00:22:13.576 }, 00:22:13.576 { 00:22:13.576 "method": "bdev_nvme_set_hotplug", 00:22:13.576 "params": { 00:22:13.576 "period_us": 100000, 00:22:13.576 "enable": false 00:22:13.576 } 00:22:13.576 }, 00:22:13.576 { 00:22:13.576 "method": "bdev_enable_histogram", 00:22:13.576 "params": { 00:22:13.576 "name": "nvme0n1", 00:22:13.576 "enable": true 00:22:13.576 } 00:22:13.576 }, 00:22:13.576 { 00:22:13.576 "method": "bdev_wait_for_examine" 00:22:13.576 } 00:22:13.576 ] 00:22:13.576 }, 00:22:13.576 { 00:22:13.576 "subsystem": "nbd", 00:22:13.576 "config": [] 00:22:13.576 } 00:22:13.576 ] 00:22:13.576 }' 00:22:13.576 [2024-07-15 14:08:11.489232] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
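Everything bdevperf needs on the initiator side is in the JSON above: the PSK file is registered as keyring entry "key0", and controller "nvme0" is attached over TCP with `"psk": "key0"`. The same attach can be issued as a live RPC; the sketch below mirrors the command the fips test later in this log actually runs, with the bdev name and NQNs taken from the dump (passing a key file path to --psk is the older form that the keyring reference above supersedes, hence the deprecation warnings elsewhere in this log):

# Live-RPC equivalent of the boot-time bdev_nvme_attach_controller
# entry in the JSON config above.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.tYucyz5m3v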
00:22:13.576 [2024-07-15 14:08:11.489285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418017 ] 00:22:13.576 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.576 [2024-07-15 14:08:11.569748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.576 [2024-07-15 14:08:11.623529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.836 [2024-07-15 14:08:11.756454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.406 14:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.406 14:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:14.406 14:08:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:14.406 14:08:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:14.406 14:08:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.406 14:08:12 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.406 Running I/O for 1 seconds... 00:22:15.798 00:22:15.798 Latency(us) 00:22:15.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.798 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:15.798 Verification LBA range: start 0x0 length 0x2000 00:22:15.798 nvme0n1 : 1.01 5680.83 22.19 0.00 0.00 22354.44 6417.07 38666.24 00:22:15.798 =================================================================================================================== 00:22:15.798 Total : 5680.83 22.19 0.00 0.00 22354.44 6417.07 38666.24 00:22:15.798 0 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:15.798 nvmf_trace.0 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1418017 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1418017 ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1418017 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1418017 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1418017' 00:22:15.798 killing process with pid 1418017 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1418017 00:22:15.798 Received shutdown signal, test time was about 1.000000 seconds 00:22:15.798 00:22:15.798 Latency(us) 00:22:15.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.798 =================================================================================================================== 00:22:15.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1418017 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.798 rmmod nvme_tcp 00:22:15.798 rmmod nvme_fabrics 00:22:15.798 rmmod nvme_keyring 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1417905 ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1417905 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1417905 ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1417905 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.798 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1417905 00:22:16.058 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.058 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:16.058 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1417905' 00:22:16.058 killing process with pid 1417905 00:22:16.058 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1417905 00:22:16.058 14:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1417905 00:22:16.058 14:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.058 14:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.058 14:08:14 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.058 14:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.058 14:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.058 14:08:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.058 14:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.058 14:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.017 14:08:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.017 14:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RBPbb565AO /tmp/tmp.OJaqmqz6YB /tmp/tmp.tYucyz5m3v 00:22:18.017 00:22:18.017 real 1m23.950s 00:22:18.017 user 2m8.689s 00:22:18.017 sys 0m26.377s 00:22:18.017 14:08:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.017 14:08:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.017 ************************************ 00:22:18.017 END TEST nvmf_tls 00:22:18.017 ************************************ 00:22:18.277 14:08:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:18.277 14:08:16 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:18.277 14:08:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:18.277 14:08:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.277 14:08:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:18.277 ************************************ 00:22:18.277 START TEST nvmf_fips 00:22:18.277 ************************************ 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:18.277 * Looking for test storage... 
00:22:18.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.277 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.278 14:08:16 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:18.278 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:18.538 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:18.538 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:18.538 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:18.539 Error setting digest 00:22:18.539 0022527E767F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:18.539 0022527E767F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.539 14:08:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.681 
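The array plumbing above boils down to a PCI scan: gather the Intel E810 functions (device IDs 0x1592 and 0x159b, per the e810 array), then resolve each function to its kernel netdev through sysfs, which is what the `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` expansion below does. A compressed, illustrative equivalent:

# Find E810 ports by vendor:device ID and print their net interfaces.
for pci in $(lspci -Dnn | awk '/8086:(1592|159b)/ { print $1 }'); do
    echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
done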
14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:26.681 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.681 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:26.682 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:26.682 Found net devices under 0000:31:00.0: cvl_0_0 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:26.682 Found net devices under 0000:31:00.1: cvl_0_1 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:22:26.682 00:22:26.682 --- 10.0.0.2 ping statistics --- 00:22:26.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.682 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:22:26.682 00:22:26.682 --- 10.0.0.1 ping statistics --- 00:22:26.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.682 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1423330 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1423330 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1423330 ']' 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.682 14:08:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:26.943 [2024-07-15 14:08:24.856169] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:26.943 [2024-07-15 14:08:24.856242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.943 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.943 [2024-07-15 14:08:24.951668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.943 [2024-07-15 14:08:25.043398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.943 [2024-07-15 14:08:25.043458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:26.943 [2024-07-15 14:08:25.043465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.943 [2024-07-15 14:08:25.043472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.943 [2024-07-15 14:08:25.043478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.943 [2024-07-15 14:08:25.043502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.515 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.515 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:27.515 14:08:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.515 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:27.515 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.777 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.777 [2024-07-15 14:08:25.818313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.777 [2024-07-15 14:08:25.834298] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:27.777 [2024-07-15 14:08:25.834568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.777 [2024-07-15 14:08:25.864538] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:27.777 malloc0 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1423529 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1423529 /var/tmp/bdevperf.sock 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1423529 ']' 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.038 14:08:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.038 [2024-07-15 14:08:25.956564] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:28.038 [2024-07-15 14:08:25.956635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423529 ] 00:22:28.038 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.038 [2024-07-15 14:08:26.018212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.038 [2024-07-15 14:08:26.082739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.609 14:08:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.609 14:08:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:28.610 14:08:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.870 [2024-07-15 14:08:26.857986] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.870 [2024-07-15 14:08:26.858043] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:28.870 TLSTESTn1 00:22:28.870 14:08:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:29.131 Running I/O for 10 seconds... 
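The sequence traced above is the whole TLS data-path setup for the FIPS test: fips.sh writes the PSK interchange key to key.txt with mode 0600, registers it with the target over rpc.py, starts bdevperf paused (-z) on a private RPC socket on core 2 (-m 0x4), attaches a TLS-secured NVMe/TCP controller with --psk, and then lets bdevperf.py release the queued verify workload. A condensed sketch of the same steps follows; repository paths are shortened, the key and NQN values are copied from the log, and key_path is a stand-in for the test's key.txt location:

    # Write the TLS PSK interchange key where only the owner can read it
    # (test-only key value, copied verbatim from the trace above).
    key_path=/tmp/key.txt    # placeholder for .../spdk/test/nvmf/fips/key.txt
    echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > "$key_path"
    chmod 0600 "$key_path"

    # Start bdevperf in wait mode (-z) on its own RPC socket, as in the log:
    # 128 queued 4 KiB verify I/Os for 10 seconds.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # Attach a TLS-secured controller; --psk points at the key file
    # (flagged deprecated in the warnings above, to be removed in v24.09).
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"

    # Release the paused workload and wait for the 10-second run to finish.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests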
00:22:39.122
00:22:39.122 Latency(us)
00:22:39.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.122 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:39.122 Verification LBA range: start 0x0 length 0x2000
00:22:39.122 TLSTESTn1 : 10.01 5914.89 23.11 0.00 0.00 21610.62 5543.25 91750.40
00:22:39.122 ===================================================================================================================
00:22:39.122 Total : 5914.89 23.11 0.00 0.00 21610.62 5543.25 91750.40
00:22:39.122 0
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:22:39.122 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:39.123 nvmf_trace.0
00:22:39.123 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:22:39.123 14:08:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1423529
00:22:39.123 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1423529 ']'
00:22:39.123 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1423529
00:22:39.123 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:22:39.123 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:39.123 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1423529
00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1423529'
00:22:39.383 killing process with pid 1423529
00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1423529
00:22:39.383 Received shutdown signal, test time was about 10.000000 seconds
00:22:39.383
00:22:39.383 Latency(us)
00:22:39.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.383 ===================================================================================================================
00:22:39.383 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:39.383 [2024-07-15 14:08:37.242397] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1423529
00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.383 rmmod nvme_tcp 00:22:39.383 rmmod nvme_fabrics 00:22:39.383 rmmod nvme_keyring 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1423330 ']' 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1423330 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1423330 ']' 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1423330 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1423330 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1423330' 00:22:39.383 killing process with pid 1423330 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1423330 00:22:39.383 [2024-07-15 14:08:37.469678] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:39.383 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1423330 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.642 14:08:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.551 14:08:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.551 14:08:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.811 00:22:41.811 real 0m23.465s 00:22:41.811 user 0m24.225s 00:22:41.811 sys 0m9.906s 00:22:41.811 14:08:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.811 14:08:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.811 ************************************ 00:22:41.811 END TEST nvmf_fips 
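The teardown traced above follows the EXIT trap installed back at fips.sh@133: the /dev/shm trace file is archived for offline analysis, bdevperf and the target are stopped, the host-side NVMe kernel modules are unloaded, the test address is flushed, and the PSK file is deleted. Roughly, with $output_dir, $bdevperf_pid and $nvmf_pid standing in for values the harness tracks:

    # Keep the SPDK trace shm file for offline debugging, as process_shm does.
    tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

    # Stop the initiator first, then the target, and wait for both to exit.
    kill "$bdevperf_pid" "$nvmf_pid"
    wait

    # Unload the host-side NVMe modules; the rmmod lines above show nvme_tcp,
    # nvme_fabrics and nvme_keyring going away in turn.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Drop the test address and never leave the PSK on disk after the run.
    ip -4 addr flush cvl_0_1
    rm -f "$key_path"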
00:22:41.811 ************************************ 00:22:41.811 14:08:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:41.811 14:08:39 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:41.811 14:08:39 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:41.811 14:08:39 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:41.811 14:08:39 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:41.811 14:08:39 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.811 14:08:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:49.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:49.947 14:08:46 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:49.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:49.947 Found net devices under 0000:31:00.0: cvl_0_0 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:49.947 Found net devices under 0000:31:00.1: cvl_0_1 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:49.947 14:08:46 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:49.947 14:08:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:49.947 14:08:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
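Before the perf_adq test starts, gather_supported_nvmf_pci_devs builds whitelists of Intel E810/X722 and Mellanox device IDs, then resolves each matching PCI function to its kernel netdev through sysfs; on this machine both E810 ports (0000:31:00.0 and 0000:31:00.1, device 0x159b) come back as cvl_0_0 and cvl_0_1. A condensed sketch of that sysfs lookup, with the two PCI addresses hard-coded from the log in place of the harness's cached PCI scan:

    # For each whitelisted PCI function, the bound netdev (if any) shows up
    # as a directory under /sys/bus/pci/devices/<addr>/net/.
    net_devs=()
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue      # no driver bound, skip
        net_devs+=("${pci_net_devs[@]##*/}")         # strip the path, keep the name
    done
    echo "${net_devs[@]}"    # prints: cvl_0_0 cvl_0_1 on this rig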
00:22:49.947 14:08:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.947 ************************************ 00:22:49.947 START TEST nvmf_perf_adq 00:22:49.947 ************************************ 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:49.947 * Looking for test storage... 00:22:49.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:49.947 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.948 14:08:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:58.083 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:58.083 Found 0000:31:00.1 (0x8086 - 0x159b) 
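The same device scan now runs a second time inside the perf_adq test, and once it has confirmed both ports, adq_reload_driver bounces the ice driver, presumably so the ADQ run starts from a freshly initialized E810 channel state, and then waits for the links to come back:

    # adq_reload_driver, as traced in the lines that follow
    rmmod ice
    modprobe ice
    sleep 5    # matches the perf_adq.sh@55 pause while the links renegotiate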
00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:58.083 Found net devices under 0000:31:00.0: cvl_0_0 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.083 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:58.084 Found net devices under 0000:31:00.1: cvl_0_1 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:58.084 14:08:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:58.343 14:08:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:00.331 14:08:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:05.612 14:09:03 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:05.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:05.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:05.612 Found net devices under 0000:31:00.0: cvl_0_0 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:05.612 Found net devices under 0000:31:00.1: cvl_0_1 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.612 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.612 14:09:03 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:05.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:23:05.613 00:23:05.613 --- 10.0.0.2 ping statistics --- 00:23:05.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.613 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:23:05.613 00:23:05.613 --- 10.0.0.1 ping statistics --- 00:23:05.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.613 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1436386 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1436386 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1436386 ']' 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.613 14:09:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.613 [2024-07-15 14:09:03.566227] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
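The prepare_net_devs/nvmf_tcp_init steps traced above split the two physical ports into the two ends of the connection: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side, where nvmf_tgt has just been launched under ip netns exec), while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, and the pair is smoke-tested with one ping in each direction. Reassembled from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target, 0.700 ms above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator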
00:23:05.613 [2024-07-15 14:09:03.566289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.613 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.613 [2024-07-15 14:09:03.648728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.613 [2024-07-15 14:09:03.724446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.613 [2024-07-15 14:09:03.724487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.613 [2024-07-15 14:09:03.724495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.613 [2024-07-15 14:09:03.724501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.613 [2024-07-15 14:09:03.724507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.613 [2024-07-15 14:09:03.724721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.613 [2024-07-15 14:09:03.724854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.613 [2024-07-15 14:09:03.724911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.613 [2024-07-15 14:09:03.724911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 [2024-07-15 14:09:04.535774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 Malloc1 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.555 [2024-07-15 14:09:04.595417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1436690 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:06.555 14:09:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:06.555 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:09.096 
"tick_rate": 2400000000, 00:23:09.096 "poll_groups": [ 00:23:09.096 { 00:23:09.096 "name": "nvmf_tgt_poll_group_000", 00:23:09.096 "admin_qpairs": 1, 00:23:09.096 "io_qpairs": 1, 00:23:09.096 "current_admin_qpairs": 1, 00:23:09.096 "current_io_qpairs": 1, 00:23:09.096 "pending_bdev_io": 0, 00:23:09.096 "completed_nvme_io": 20807, 00:23:09.096 "transports": [ 00:23:09.096 { 00:23:09.096 "trtype": "TCP" 00:23:09.096 } 00:23:09.096 ] 00:23:09.096 }, 00:23:09.096 { 00:23:09.096 "name": "nvmf_tgt_poll_group_001", 00:23:09.096 "admin_qpairs": 0, 00:23:09.096 "io_qpairs": 1, 00:23:09.096 "current_admin_qpairs": 0, 00:23:09.096 "current_io_qpairs": 1, 00:23:09.096 "pending_bdev_io": 0, 00:23:09.096 "completed_nvme_io": 29451, 00:23:09.096 "transports": [ 00:23:09.096 { 00:23:09.096 "trtype": "TCP" 00:23:09.096 } 00:23:09.096 ] 00:23:09.096 }, 00:23:09.096 { 00:23:09.096 "name": "nvmf_tgt_poll_group_002", 00:23:09.096 "admin_qpairs": 0, 00:23:09.096 "io_qpairs": 1, 00:23:09.096 "current_admin_qpairs": 0, 00:23:09.096 "current_io_qpairs": 1, 00:23:09.096 "pending_bdev_io": 0, 00:23:09.096 "completed_nvme_io": 23930, 00:23:09.096 "transports": [ 00:23:09.096 { 00:23:09.096 "trtype": "TCP" 00:23:09.096 } 00:23:09.096 ] 00:23:09.096 }, 00:23:09.096 { 00:23:09.096 "name": "nvmf_tgt_poll_group_003", 00:23:09.096 "admin_qpairs": 0, 00:23:09.096 "io_qpairs": 1, 00:23:09.096 "current_admin_qpairs": 0, 00:23:09.096 "current_io_qpairs": 1, 00:23:09.096 "pending_bdev_io": 0, 00:23:09.096 "completed_nvme_io": 21807, 00:23:09.096 "transports": [ 00:23:09.096 { 00:23:09.096 "trtype": "TCP" 00:23:09.096 } 00:23:09.096 ] 00:23:09.096 } 00:23:09.096 ] 00:23:09.096 }' 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:09.096 14:09:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1436690 00:23:17.233 Initializing NVMe Controllers 00:23:17.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:17.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:17.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:17.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:17.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:17.233 Initialization complete. Launching workers. 
00:23:17.233 ========================================================
00:23:17.233 Latency(us)
00:23:17.233 Device Information : IOPS MiB/s Average min max
00:23:17.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11735.70 45.84 5453.37 1496.10 8877.17
00:23:17.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15046.40 58.77 4253.31 1271.42 9717.79
00:23:17.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15051.70 58.80 4252.69 1484.51 11242.09
00:23:17.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13757.20 53.74 4652.61 1330.99 11325.39
00:23:17.234 ========================================================
00:23:17.234 Total : 55591.00 217.15 4605.30 1271.42 11325.39
00:23:17.234
00:23:17.234 [2024-07-15 14:09:14.712688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc01a0 is same with the state(5) to be set
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:17.234 rmmod nvme_tcp
00:23:17.234 rmmod nvme_fabrics
00:23:17.234 rmmod nvme_keyring
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1436386 ']'
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1436386
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1436386 ']'
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1436386
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1436386
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1436386'
00:23:17.234 killing process with pid 1436386
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1436386
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1436386
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.234 14:09:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.149 14:09:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.149 14:09:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:19.149 14:09:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:21.060 14:09:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:22.446 14:09:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:27.734 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:27.734 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:27.734 Found net devices under 0000:31:00.0: cvl_0_0 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:27.734 Found net devices under 0000:31:00.1: cvl_0_1 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.734 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:23:27.735 00:23:27.735 --- 10.0.0.2 ping statistics --- 00:23:27.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.735 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:23:27.735 00:23:27.735 --- 10.0.0.1 ping statistics --- 00:23:27.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.735 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:27.735 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:27.995 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:27.995 net.core.busy_poll = 1 00:23:27.995 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:27.995 net.core.busy_read = 1 00:23:27.995 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:27.995 14:09:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root 
mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1441644 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1441644 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1441644 ']' 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.995 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.256 [2024-07-15 14:09:26.169156] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:28.256 [2024-07-15 14:09:26.169250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.256 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.256 [2024-07-15 14:09:26.251241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.256 [2024-07-15 14:09:26.326280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.256 [2024-07-15 14:09:26.326316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.256 [2024-07-15 14:09:26.326324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.256 [2024-07-15 14:09:26.326330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.256 [2024-07-15 14:09:26.326336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
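For readers skimming the trace, the adq_configure_driver sequence above boils down to the following host-side commands (a minimal sketch reconstructed from this run; the port name cvl_0_0, the listener address 10.0.0.2 and port 4420 are the values used here, and in the test every command actually runs inside the cvl_0_0_ns_spdk namespace via ip netns exec, which is elided below):

    # Enable hardware TC offload on the E810 port and clear the
    # channel-pkt-inspect-optimize private flag, as ADQ requires.
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # Let sockets busy-poll instead of sleeping on interrupts.
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Carve the NIC queues into two traffic classes (2 queues each) in channel
    # mode, then steer NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 in hardware.
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # Align transmit queues with receive queues (helper shipped in the SPDK tree).
    scripts/perf/nvmf/set_xps_rxqs cvl_0_0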
00:23:28.256 [2024-07-15 14:09:26.326469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.256 [2024-07-15 14:09:26.326585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.256 [2024-07-15 14:09:26.326741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.256 [2024-07-15 14:09:26.326741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.827 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.827 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:28.827 14:09:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.827 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.827 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 14:09:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.087 14:09:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:29.087 14:09:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:29.087 14:09:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:29.087 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 14:09:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 [2024-07-15 14:09:27.116749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 Malloc1 00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 14:09:27 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:29.087 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:29.088 [2024-07-15 14:09:27.176096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1441940
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2
00:23:29.088 14:09:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:23:29.348 EAL: No free 2048 kB hugepages reported on node 1
00:23:31.261 14:09:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats
00:23:31.261 14:09:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:31.261 14:09:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:31.261 14:09:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:31.261 14:09:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:23:31.261 "tick_rate": 2400000000,
00:23:31.261 "poll_groups": [
00:23:31.261 {
00:23:31.261 "name": "nvmf_tgt_poll_group_000",
00:23:31.261 "admin_qpairs": 1,
00:23:31.261 "io_qpairs": 1,
00:23:31.261 "current_admin_qpairs": 1,
00:23:31.261 "current_io_qpairs": 1,
00:23:31.261 "pending_bdev_io": 0,
00:23:31.261 "completed_nvme_io": 28918,
00:23:31.261 "transports": [
00:23:31.261 {
00:23:31.261 "trtype": "TCP"
00:23:31.261 }
00:23:31.261 ]
00:23:31.261 },
00:23:31.261 {
00:23:31.261 "name": "nvmf_tgt_poll_group_001",
00:23:31.261 "admin_qpairs": 0,
00:23:31.261 "io_qpairs": 3,
00:23:31.261 "current_admin_qpairs": 0,
00:23:31.261 "current_io_qpairs": 3,
00:23:31.261 "pending_bdev_io": 0,
00:23:31.261 "completed_nvme_io": 41781,
00:23:31.261 "transports": [
00:23:31.261 {
00:23:31.261 "trtype": "TCP"
00:23:31.261 }
00:23:31.261 ]
00:23:31.261 },
00:23:31.261 {
00:23:31.262 "name": "nvmf_tgt_poll_group_002",
00:23:31.262 "admin_qpairs": 0,
00:23:31.262 "io_qpairs": 0,
00:23:31.262 "current_admin_qpairs": 0,
00:23:31.262 "current_io_qpairs": 0,
00:23:31.262 "pending_bdev_io": 0,
00:23:31.262 "completed_nvme_io": 0,
00:23:31.262 "transports": [
00:23:31.262 {
00:23:31.262 "trtype": "TCP"
00:23:31.262 }
00:23:31.262 ]
00:23:31.262 },
00:23:31.262 {
00:23:31.262 "name": "nvmf_tgt_poll_group_003",
00:23:31.262 "admin_qpairs": 0,
00:23:31.262 "io_qpairs": 0,
00:23:31.262 "current_admin_qpairs": 0,
00:23:31.262 "current_io_qpairs": 0,
00:23:31.262 "pending_bdev_io": 0,
00:23:31.262 "completed_nvme_io": 0,
00:23:31.262 "transports": [
00:23:31.262 {
00:23:31.262 "trtype": "TCP"
00:23:31.262 }
00:23:31.262 ]
00:23:31.262 }
00:23:31.262 ]
00:23:31.262 }'
14:09:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
14:09:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
14:09:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2
14:09:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]]
14:09:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1441940
00:23:39.478 Initializing NVMe Controllers
00:23:39.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:39.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:23:39.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:23:39.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:23:39.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:23:39.478 Initialization complete. Launching workers.
00:23:39.478 ========================================================
00:23:39.478 Latency(us)
00:23:39.478 Device Information : IOPS MiB/s Average min max
00:23:39.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 19400.55 75.78 3298.78 933.60 6323.92
00:23:39.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6134.08 23.96 10434.34 1490.40 53516.37
00:23:39.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8217.08 32.10 7788.59 1354.73 55136.49
00:23:39.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7349.78 28.71 8708.99 1123.38 53383.82
00:23:39.478 ========================================================
00:23:39.478 Total : 41101.49 160.55 6228.77 933.60 55136.49
00:23:39.478
00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1441644 ']'
14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- #
killprocess 1441644 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1441644 ']' 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1441644 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1441644 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1441644' 00:23:39.478 killing process with pid 1441644 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1441644 00:23:39.478 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1441644 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.739 14:09:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.652 14:09:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.652 14:09:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:41.652 00:23:41.652 real 0m52.666s 00:23:41.652 user 2m49.433s 00:23:41.652 sys 0m10.985s 00:23:41.652 14:09:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:41.652 14:09:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 ************************************ 00:23:41.652 END TEST nvmf_perf_adq 00:23:41.652 ************************************ 00:23:41.652 14:09:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:41.652 14:09:39 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:41.652 14:09:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:41.652 14:09:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.652 14:09:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 ************************************ 00:23:41.652 START TEST nvmf_shutdown 00:23:41.652 ************************************ 00:23:41.652 14:09:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:41.913 * Looking for test storage... 
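Before the shutdown suite starts, it is worth spelling out how the perf_adq run above judged success: the nvmf_get_stats output shows all four I/O queue pairs concentrated on poll groups 000 and 001 (the cores backing the ADQ traffic class), and perf_adq.sh@100-101 simply counts the idle groups. A minimal sketch of that check, assuming a running target reachable over the default RPC socket (the direct scripts/rpc.py call stands in for the test's rpc_cmd wrapper):

    # Print one line per poll group that has no active I/O queue pairs,
    # then count the lines; with ADQ steering working, at least two of
    # the four poll groups should be idle.
    count=$(scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
      | wc -l)
    if [[ $count -lt 2 ]]; then
      echo "ADQ steering check failed: only $count idle poll groups" >&2
      exit 1
    fi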
00:23:41.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:41.914 ************************************ 00:23:41.914 START TEST nvmf_shutdown_tc1 00:23:41.914 ************************************ 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:41.914 14:09:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.914 14:09:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:50.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:50.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.074 14:09:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:50.074 Found net devices under 0000:31:00.0: cvl_0_0 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:50.074 Found net devices under 0000:31:00.1: cvl_0_1 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:50.074 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:50.075 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.075 14:09:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.075 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.075 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.075 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:50.075 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.075 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.075 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:50.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:23:50.335 00:23:50.335 --- 10.0.0.2 ping statistics --- 00:23:50.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.335 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:50.335 00:23:50.335 --- 10.0.0.1 ping statistics --- 00:23:50.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.335 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1448659 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1448659 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1448659 ']' 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.335 14:09:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.335 [2024-07-15 14:09:48.316276] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
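The nvmf_tcp_init block traced above is the same namespace plumbing used in the perf_adq run: the target-side port is moved into its own network namespace so initiator and target exchange traffic over the physical E810 link rather than loopback. Stripped of the trace prefixes, the topology amounts to the following (a sketch; interface names and addresses are the ones used in this run):

    ip netns add cvl_0_0_ns_spdk                      # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator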
00:23:50.335 [2024-07-15 14:09:48.316366] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.335 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.335 [2024-07-15 14:09:48.415968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.596 [2024-07-15 14:09:48.512849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.596 [2024-07-15 14:09:48.512908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.596 [2024-07-15 14:09:48.512917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.596 [2024-07-15 14:09:48.512924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.596 [2024-07-15 14:09:48.512930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.596 [2024-07-15 14:09:48.513061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.596 [2024-07-15 14:09:48.513228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.596 [2024-07-15 14:09:48.513392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.596 [2024-07-15 14:09:48.513393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.168 [2024-07-15 14:09:49.142203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.168 14:09:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.168 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.168 Malloc1 00:23:51.168 [2024-07-15 14:09:49.245645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.168 Malloc2 00:23:51.429 Malloc3 00:23:51.429 Malloc4 00:23:51.429 Malloc5 00:23:51.429 Malloc6 00:23:51.429 Malloc7 00:23:51.429 Malloc8 00:23:51.691 Malloc9 00:23:51.691 Malloc10 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1448888 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1448888 
/var/tmp/bdevperf.sock 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1448888 ']' 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.691 { 00:23:51.691 "params": { 00:23:51.691 "name": "Nvme$subsystem", 00:23:51.691 "trtype": "$TEST_TRANSPORT", 00:23:51.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.691 "adrfam": "ipv4", 00:23:51.691 "trsvcid": "$NVMF_PORT", 00:23:51.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.691 "hdgst": ${hdgst:-false}, 00:23:51.691 "ddgst": ${ddgst:-false} 00:23:51.691 }, 00:23:51.691 "method": "bdev_nvme_attach_controller" 00:23:51.691 } 00:23:51.691 EOF 00:23:51.691 )") 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.691 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.691 { 00:23:51.691 "params": { 00:23:51.691 "name": "Nvme$subsystem", 00:23:51.691 "trtype": "$TEST_TRANSPORT", 00:23:51.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.691 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 
"name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 [2024-07-15 14:09:49.698816] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:51.692 [2024-07-15 14:09:49.698872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.692 { 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme$subsystem", 00:23:51.692 "trtype": "$TEST_TRANSPORT", 00:23:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "$NVMF_PORT", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.692 "hdgst": ${hdgst:-false}, 
00:23:51.692 "ddgst": ${ddgst:-false} 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 } 00:23:51.692 EOF 00:23:51.692 )") 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.692 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:51.692 14:09:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme1", 00:23:51.692 "trtype": "tcp", 00:23:51.692 "traddr": "10.0.0.2", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "4420", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.692 "hdgst": false, 00:23:51.692 "ddgst": false 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 },{ 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme2", 00:23:51.692 "trtype": "tcp", 00:23:51.692 "traddr": "10.0.0.2", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "4420", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:51.692 "hdgst": false, 00:23:51.692 "ddgst": false 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 },{ 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme3", 00:23:51.692 "trtype": "tcp", 00:23:51.692 "traddr": "10.0.0.2", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "4420", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:51.692 "hdgst": false, 00:23:51.692 "ddgst": false 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 },{ 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme4", 00:23:51.692 "trtype": "tcp", 00:23:51.692 "traddr": "10.0.0.2", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "4420", 00:23:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:51.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:51.692 "hdgst": false, 00:23:51.692 "ddgst": false 00:23:51.692 }, 00:23:51.692 "method": "bdev_nvme_attach_controller" 00:23:51.692 },{ 00:23:51.692 "params": { 00:23:51.692 "name": "Nvme5", 00:23:51.692 "trtype": "tcp", 00:23:51.692 "traddr": "10.0.0.2", 00:23:51.692 "adrfam": "ipv4", 00:23:51.692 "trsvcid": "4420", 00:23:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:51.693 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:51.693 "hdgst": false, 00:23:51.693 "ddgst": false 00:23:51.693 }, 00:23:51.693 "method": "bdev_nvme_attach_controller" 00:23:51.693 },{ 00:23:51.693 "params": { 00:23:51.693 "name": "Nvme6", 00:23:51.693 "trtype": "tcp", 00:23:51.693 "traddr": "10.0.0.2", 00:23:51.693 "adrfam": "ipv4", 00:23:51.693 "trsvcid": "4420", 00:23:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:51.693 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:51.693 "hdgst": false, 00:23:51.693 "ddgst": false 00:23:51.693 }, 00:23:51.693 "method": "bdev_nvme_attach_controller" 00:23:51.693 },{ 00:23:51.693 "params": { 00:23:51.693 "name": "Nvme7", 00:23:51.693 "trtype": "tcp", 00:23:51.693 "traddr": "10.0.0.2", 00:23:51.693 "adrfam": "ipv4", 00:23:51.693 "trsvcid": "4420", 00:23:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:51.693 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:51.693 "hdgst": false, 00:23:51.693 "ddgst": false 
00:23:51.693 }, 00:23:51.693 "method": "bdev_nvme_attach_controller" 00:23:51.693 },{ 00:23:51.693 "params": { 00:23:51.693 "name": "Nvme8", 00:23:51.693 "trtype": "tcp", 00:23:51.693 "traddr": "10.0.0.2", 00:23:51.693 "adrfam": "ipv4", 00:23:51.693 "trsvcid": "4420", 00:23:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:51.693 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:51.693 "hdgst": false, 00:23:51.693 "ddgst": false 00:23:51.693 }, 00:23:51.693 "method": "bdev_nvme_attach_controller" 00:23:51.693 },{ 00:23:51.693 "params": { 00:23:51.693 "name": "Nvme9", 00:23:51.693 "trtype": "tcp", 00:23:51.693 "traddr": "10.0.0.2", 00:23:51.693 "adrfam": "ipv4", 00:23:51.693 "trsvcid": "4420", 00:23:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:51.693 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:51.693 "hdgst": false, 00:23:51.693 "ddgst": false 00:23:51.693 }, 00:23:51.693 "method": "bdev_nvme_attach_controller" 00:23:51.693 },{ 00:23:51.693 "params": { 00:23:51.693 "name": "Nvme10", 00:23:51.693 "trtype": "tcp", 00:23:51.693 "traddr": "10.0.0.2", 00:23:51.693 "adrfam": "ipv4", 00:23:51.693 "trsvcid": "4420", 00:23:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:51.693 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:51.693 "hdgst": false, 00:23:51.693 "ddgst": false 00:23:51.693 }, 00:23:51.693 "method": "bdev_nvme_attach_controller" 00:23:51.693 }' 00:23:51.693 [2024-07-15 14:09:49.765811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.954 [2024-07-15 14:09:49.831124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1448888 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:53.364 14:09:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:54.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1448888 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1448659 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
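[Annotation, added for readability; not part of the captured log.] The trace that follows expands gen_nvmf_target_json once more to build the config for the bdevperf relaunch. The sketch below is reconstructed from the traced commands, not copied from the SPDK source: the comma-join via a subshell is an assumption about how the helper is written, and TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the harness environment (tcp, 10.0.0.2 and 4420 in the expanded output that follows).

config=()
for subsystem in "${@:-1}"; do
  # one attach-controller fragment per requested subsystem number
  config+=("$(
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# join the fragments with commas and validate with jq; bdevperf reads the
# result through a process-substitution descriptor (--json /dev/fd/62 above)
(IFS=,; printf '%s\n' "${config[*]}") | jq .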
00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.304 { 00:23:54.304 "params": { 00:23:54.304 "name": "Nvme$subsystem", 00:23:54.304 "trtype": "$TEST_TRANSPORT", 00:23:54.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.304 "adrfam": "ipv4", 00:23:54.304 "trsvcid": "$NVMF_PORT", 00:23:54.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.304 "hdgst": ${hdgst:-false}, 00:23:54.304 "ddgst": ${ddgst:-false} 00:23:54.304 }, 00:23:54.304 "method": "bdev_nvme_attach_controller" 00:23:54.304 } 00:23:54.304 EOF 00:23:54.304 )") 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.304 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.304 { 00:23:54.304 "params": { 00:23:54.304 "name": "Nvme$subsystem", 00:23:54.304 "trtype": "$TEST_TRANSPORT", 00:23:54.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.304 "adrfam": "ipv4", 00:23:54.304 "trsvcid": "$NVMF_PORT", 00:23:54.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.304 "hdgst": ${hdgst:-false}, 00:23:54.304 "ddgst": ${ddgst:-false} 00:23:54.304 }, 00:23:54.304 "method": "bdev_nvme_attach_controller" 00:23:54.304 } 00:23:54.304 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 [2024-07-15 14:09:52.120410] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:54.305 [2024-07-15 14:09:52.120466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449563 ] 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.305 { 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme$subsystem", 00:23:54.305 "trtype": "$TEST_TRANSPORT", 00:23:54.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "$NVMF_PORT", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.305 "hdgst": ${hdgst:-false}, 00:23:54.305 "ddgst": ${ddgst:-false} 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 } 00:23:54.305 EOF 00:23:54.305 )") 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:54.305 14:09:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme1", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme2", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme3", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme4", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme5", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme6", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme7", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme8", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:54.305 "hdgst": false, 
00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.305 "method": "bdev_nvme_attach_controller" 00:23:54.305 },{ 00:23:54.305 "params": { 00:23:54.305 "name": "Nvme9", 00:23:54.305 "trtype": "tcp", 00:23:54.305 "traddr": "10.0.0.2", 00:23:54.305 "adrfam": "ipv4", 00:23:54.305 "trsvcid": "4420", 00:23:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:54.305 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:54.305 "hdgst": false, 00:23:54.305 "ddgst": false 00:23:54.305 }, 00:23:54.306 "method": "bdev_nvme_attach_controller" 00:23:54.306 },{ 00:23:54.306 "params": { 00:23:54.306 "name": "Nvme10", 00:23:54.306 "trtype": "tcp", 00:23:54.306 "traddr": "10.0.0.2", 00:23:54.306 "adrfam": "ipv4", 00:23:54.306 "trsvcid": "4420", 00:23:54.306 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:54.306 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:54.306 "hdgst": false, 00:23:54.306 "ddgst": false 00:23:54.306 }, 00:23:54.306 "method": "bdev_nvme_attach_controller" 00:23:54.306 }' 00:23:54.306 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.306 [2024-07-15 14:09:52.187895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.306 [2024-07-15 14:09:52.251916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.688 Running I/O for 1 seconds... 00:23:56.630 00:23:56.630 Latency(us) 00:23:56.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.630 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme1n1 : 1.11 230.52 14.41 0.00 0.00 269950.29 21408.43 244667.73 00:23:56.630 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme2n1 : 1.18 216.23 13.51 0.00 0.00 288328.75 14964.05 263891.63 00:23:56.630 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme3n1 : 1.18 217.65 13.60 0.00 0.00 281526.08 12997.97 248162.99 00:23:56.630 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme4n1 : 1.19 268.83 16.80 0.00 0.00 224440.66 20425.39 244667.73 00:23:56.630 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme5n1 : 1.19 215.60 13.48 0.00 0.00 274996.27 16274.77 277872.64 00:23:56.630 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme6n1 : 1.20 267.08 16.69 0.00 0.00 218303.66 16056.32 293601.28 00:23:56.630 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme7n1 : 1.17 227.65 14.23 0.00 0.00 240819.96 16820.91 248162.99 00:23:56.630 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme8n1 : 1.20 266.51 16.66 0.00 0.00 210730.15 19770.03 214958.08 00:23:56.630 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.630 Verification LBA range: start 0x0 length 0x400 00:23:56.630 Nvme9n1 : 1.19 268.25 16.77 0.00 0.00 205706.58 15291.73 239424.85 00:23:56.630 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:56.630 Verification LBA range: start 0x0 length 0x400
00:23:56.630 Nvme10n1 : 1.20 265.84 16.62 0.00 0.00 204328.45 15073.28 272629.76
00:23:56.630 ===================================================================================================================
00:23:56.630 Total : 2444.17 152.76 0.00 0.00 238674.85 12997.97 293601.28
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:56.892 rmmod nvme_tcp
00:23:56.892 rmmod nvme_fabrics
00:23:56.892 rmmod nvme_keyring
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1448659 ']'
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1448659
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1448659 ']'
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1448659
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448659
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448659'
00:23:56.892 killing process with pid 1448659
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1448659
00:23:56.892 14:09:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1448659
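[Annotation, added for readability; not part of the captured log.] A quick consistency check on the bdevperf table above: the run used 64 KiB I/Os (-o 65536, i.e. 1/16 MiB), so each MiB/s figure should equal the IOPS figure divided by 16, and it does:

awk 'BEGIN { printf "%.2f\n", 230.52 / 16 }'   # Nvme1n1: prints 14.41, matching its MiB/s column
awk 'BEGIN { printf "%.2f\n", 2444.17 / 16 }'  # Total:   prints 152.76, matching the Total row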
00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.153 14:09:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.698 00:23:59.698 real 0m17.357s 00:23:59.698 user 0m32.811s 00:23:59.698 sys 0m7.377s 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.698 ************************************ 00:23:59.698 END TEST nvmf_shutdown_tc1 00:23:59.698 ************************************ 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:59.698 ************************************ 00:23:59.698 START TEST nvmf_shutdown_tc2 00:23:59.698 ************************************ 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.698 
14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.698 14:09:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:59.698 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:59.698 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.698 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:23:59.699 Found net devices under 0000:31:00.0: cvl_0_0 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:59.699 Found net devices under 0000:31:00.1: cvl_0_1 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.879 ms 00:23:59.699 00:23:59.699 --- 10.0.0.2 ping statistics --- 00:23:59.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.699 rtt min/avg/max/mdev = 0.879/0.879/0.879/0.000 ms 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:23:59.699 00:23:59.699 --- 10.0.0.1 ping statistics --- 00:23:59.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.699 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1450668 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1450668 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1450668 ']' 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.699 14:09:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.699 [2024-07-15 14:09:57.781951] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:59.699 [2024-07-15 14:09:57.782011] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.961 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.961 [2024-07-15 14:09:57.876841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.961 [2024-07-15 14:09:57.938598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.961 [2024-07-15 14:09:57.938627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.961 [2024-07-15 14:09:57.938633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.961 [2024-07-15 14:09:57.938637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.961 [2024-07-15 14:09:57.938641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
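For reference, the target launch traced above condenses to the shell below. The binary path, instance id (-i 0), tracepoint mask (-e 0xFFFF), core mask (-m 0x1E, reactors on cores 1-4), and the namespace name are all taken from this log; the doubled 'ip netns exec cvl_0_0_ns_spdk' prefix in the trace is the harness re-prepending NVMF_TARGET_NS_CMD to NVMF_APP on each init, and it collapses to a single level:

# Run the SPDK NVMe-oF target inside the test namespace; reactors land
# on cores 1-4 per the 0x1E core mask, matching the notices just below.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# waitforlisten then polls /var/tmp/spdk.sock until the app answers RPCs.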
00:23:59.961 [2024-07-15 14:09:57.938783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.961 [2024-07-15 14:09:57.938955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.961 [2024-07-15 14:09:57.939086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.961 [2024-07-15 14:09:57.939087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.534 [2024-07-15 14:09:58.607158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.534 14:09:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.534 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.795 14:09:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.795 Malloc1 00:24:00.795 [2024-07-15 14:09:58.705726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.795 Malloc2 00:24:00.795 Malloc3 00:24:00.795 Malloc4 00:24:00.795 Malloc5 00:24:00.795 Malloc6 00:24:01.056 Malloc7 00:24:01.056 Malloc8 00:24:01.056 Malloc9 00:24:01.056 Malloc10 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1451055 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1451055 /var/tmp/bdevperf.sock 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1451055 ']' 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
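The Malloc1 through Malloc10 bdevs listed above, and the 10.0.0.2:4420 listener, come from the rpcs.txt batch assembled in the '# cat' loop; the individual RPC lines are not echoed in this excerpt, but a typical per-subsystem sequence is the sketch below (standard SPDK rpc.py calls; the 128 MiB size and 512-byte block size are illustrative, not read from this run):

# One malloc-backed NVMe-oF subsystem per loop iteration, ten in total.
for i in {1..10}; do
    ./scripts/rpc.py bdev_malloc_create 128 512 -b Malloc$i
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done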
00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.056 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.056 { 00:24:01.056 "params": { 00:24:01.056 "name": "Nvme$subsystem", 00:24:01.056 "trtype": "$TEST_TRANSPORT", 00:24:01.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.056 "adrfam": "ipv4", 00:24:01.056 "trsvcid": "$NVMF_PORT", 00:24:01.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.056 "hdgst": ${hdgst:-false}, 00:24:01.056 "ddgst": ${ddgst:-false} 00:24:01.056 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 [2024-07-15 14:09:59.153589] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
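The config+=("$(cat <<-EOF ... EOF)") steps running through this stretch are gen_nvmf_target_json building the bdevperf attach config, one JSON fragment per subsystem. A minimal, runnable reduction of the idiom (fragment body trimmed; the real template carries the full params object shown rendered further below):

# Render one fragment per subsystem via a here-doc, collect the fragments
# in an array, then emit them comma-joined (the harness pipes this to jq).
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # first char of IFS (the comma) joins the fragments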
00:24:01.057 [2024-07-15 14:09:59.153644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451055 ] 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.057 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.057 { 00:24:01.057 "params": { 00:24:01.057 "name": "Nvme$subsystem", 00:24:01.057 "trtype": "$TEST_TRANSPORT", 00:24:01.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.057 "adrfam": "ipv4", 00:24:01.057 "trsvcid": "$NVMF_PORT", 00:24:01.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.057 "hdgst": ${hdgst:-false}, 00:24:01.057 "ddgst": ${ddgst:-false} 00:24:01.057 }, 00:24:01.057 "method": "bdev_nvme_attach_controller" 00:24:01.057 } 00:24:01.057 EOF 00:24:01.057 )") 00:24:01.318 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.318 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.318 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.318 { 00:24:01.318 "params": { 00:24:01.318 "name": "Nvme$subsystem", 00:24:01.318 "trtype": "$TEST_TRANSPORT", 00:24:01.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.318 "adrfam": "ipv4", 00:24:01.318 "trsvcid": "$NVMF_PORT", 00:24:01.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.318 
"hdgst": ${hdgst:-false}, 00:24:01.318 "ddgst": ${ddgst:-false} 00:24:01.318 }, 00:24:01.318 "method": "bdev_nvme_attach_controller" 00:24:01.318 } 00:24:01.318 EOF 00:24:01.318 )") 00:24:01.318 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:01.318 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.318 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:01.318 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:01.318 14:09:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:01.318 "params": { 00:24:01.318 "name": "Nvme1", 00:24:01.318 "trtype": "tcp", 00:24:01.318 "traddr": "10.0.0.2", 00:24:01.318 "adrfam": "ipv4", 00:24:01.318 "trsvcid": "4420", 00:24:01.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.318 "hdgst": false, 00:24:01.318 "ddgst": false 00:24:01.318 }, 00:24:01.318 "method": "bdev_nvme_attach_controller" 00:24:01.318 },{ 00:24:01.318 "params": { 00:24:01.318 "name": "Nvme2", 00:24:01.318 "trtype": "tcp", 00:24:01.318 "traddr": "10.0.0.2", 00:24:01.318 "adrfam": "ipv4", 00:24:01.318 "trsvcid": "4420", 00:24:01.318 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:01.318 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:01.318 "hdgst": false, 00:24:01.318 "ddgst": false 00:24:01.318 }, 00:24:01.318 "method": "bdev_nvme_attach_controller" 00:24:01.318 },{ 00:24:01.318 "params": { 00:24:01.318 "name": "Nvme3", 00:24:01.318 "trtype": "tcp", 00:24:01.318 "traddr": "10.0.0.2", 00:24:01.318 "adrfam": "ipv4", 00:24:01.318 "trsvcid": "4420", 00:24:01.318 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:01.318 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:01.318 "hdgst": false, 00:24:01.318 "ddgst": false 00:24:01.318 }, 00:24:01.318 "method": "bdev_nvme_attach_controller" 00:24:01.318 },{ 00:24:01.318 "params": { 00:24:01.318 "name": "Nvme4", 00:24:01.318 "trtype": "tcp", 00:24:01.318 "traddr": "10.0.0.2", 00:24:01.318 "adrfam": "ipv4", 00:24:01.318 "trsvcid": "4420", 00:24:01.319 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:01.319 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:01.319 "hdgst": false, 00:24:01.319 "ddgst": false 00:24:01.319 }, 00:24:01.319 "method": "bdev_nvme_attach_controller" 00:24:01.319 },{ 00:24:01.319 "params": { 00:24:01.319 "name": "Nvme5", 00:24:01.319 "trtype": "tcp", 00:24:01.319 "traddr": "10.0.0.2", 00:24:01.319 "adrfam": "ipv4", 00:24:01.319 "trsvcid": "4420", 00:24:01.319 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:01.319 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:01.319 "hdgst": false, 00:24:01.319 "ddgst": false 00:24:01.319 }, 00:24:01.319 "method": "bdev_nvme_attach_controller" 00:24:01.319 },{ 00:24:01.319 "params": { 00:24:01.319 "name": "Nvme6", 00:24:01.319 "trtype": "tcp", 00:24:01.319 "traddr": "10.0.0.2", 00:24:01.319 "adrfam": "ipv4", 00:24:01.319 "trsvcid": "4420", 00:24:01.319 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:01.319 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:01.319 "hdgst": false, 00:24:01.319 "ddgst": false 00:24:01.319 }, 00:24:01.319 "method": "bdev_nvme_attach_controller" 00:24:01.319 },{ 00:24:01.319 "params": { 00:24:01.319 "name": "Nvme7", 00:24:01.319 "trtype": "tcp", 00:24:01.319 "traddr": "10.0.0.2", 00:24:01.319 "adrfam": "ipv4", 00:24:01.319 "trsvcid": "4420", 00:24:01.319 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:01.319 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:01.319 "hdgst": false, 
00:24:01.319 "ddgst": false 00:24:01.319 }, 00:24:01.319 "method": "bdev_nvme_attach_controller" 00:24:01.319 },{ 00:24:01.319 "params": { 00:24:01.319 "name": "Nvme8", 00:24:01.319 "trtype": "tcp", 00:24:01.319 "traddr": "10.0.0.2", 00:24:01.319 "adrfam": "ipv4", 00:24:01.319 "trsvcid": "4420", 00:24:01.319 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:01.319 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:01.319 "hdgst": false, 00:24:01.319 "ddgst": false 00:24:01.319 }, 00:24:01.319 "method": "bdev_nvme_attach_controller" 00:24:01.319 },{ 00:24:01.319 "params": { 00:24:01.319 "name": "Nvme9", 00:24:01.319 "trtype": "tcp", 00:24:01.319 "traddr": "10.0.0.2", 00:24:01.319 "adrfam": "ipv4", 00:24:01.319 "trsvcid": "4420", 00:24:01.319 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:01.319 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:01.319 "hdgst": false, 00:24:01.319 "ddgst": false 00:24:01.319 }, 00:24:01.319 "method": "bdev_nvme_attach_controller" 00:24:01.319 },{ 00:24:01.319 "params": { 00:24:01.319 "name": "Nvme10", 00:24:01.319 "trtype": "tcp", 00:24:01.319 "traddr": "10.0.0.2", 00:24:01.319 "adrfam": "ipv4", 00:24:01.319 "trsvcid": "4420", 00:24:01.319 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:01.319 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:01.319 "hdgst": false, 00:24:01.319 "ddgst": false 00:24:01.319 }, 00:24:01.319 "method": "bdev_nvme_attach_controller" 00:24:01.319 }' 00:24:01.319 [2024-07-15 14:09:59.220037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.319 [2024-07-15 14:09:59.284599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.233 Running I/O for 10 seconds... 00:24:03.233 14:10:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.233 14:10:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:03.233 14:10:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:03.233 14:10:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.233 14:10:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.233 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.493 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.493 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:03.493 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:03.493 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1451055 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1451055 ']' 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1451055 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:03.754 14:10:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1451055 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1451055' 00:24:03.754 killing process with pid 1451055 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1451055 00:24:03.754 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1451055
00:24:03.754 Received shutdown signal, test time was about 0.960669 seconds
00:24:03.754
00:24:03.754 Latency(us)
00:24:03.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.754 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme1n1 : 0.95 268.39 16.77 0.00 0.00 235509.12 26760.53 239424.85
00:24:03.754 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme2n1 : 0.93 206.11 12.88 0.00 0.00 299929.60 36481.71 234181.97
00:24:03.754 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme3n1 : 0.92 208.89 13.06 0.00 0.00 289738.81 16602.45 253405.87
00:24:03.754 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme4n1 : 0.95 270.25 16.89 0.00 0.00 219421.65 15510.19 253405.87
00:24:03.754 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme5n1 : 0.94 204.67 12.79 0.00 0.00 282001.64 16384.00 251658.24
00:24:03.754 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme6n1 : 0.96 267.45 16.72 0.00 0.00 212252.80 15073.28 232434.35
00:24:03.754 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme7n1 : 0.93 206.41 12.90 0.00 0.00 266971.02 33204.91 251658.24
00:24:03.754 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme8n1 : 0.95 269.22 16.83 0.00 0.00 201000.32 16930.13 253405.87
00:24:03.754 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme9n1 : 0.96 266.73 16.67 0.00 0.00 198317.65 19988.48 255153.49
00:24:03.754 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.754 Verification LBA range: start 0x0 length 0x400
00:24:03.754 Nvme10n1 : 0.94 203.60 12.73 0.00 0.00 252454.12 19223.89 270882.13
00:24:03.754 ===================================================================================================================
00:24:03.754 Total : 2371.72 148.23 0.00 0.00 241122.62
15073.28 270882.13 00:24:04.014 14:10:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1450668 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.039 14:10:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.039 rmmod nvme_tcp 00:24:05.039 rmmod nvme_fabrics 00:24:05.039 rmmod nvme_keyring 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1450668 ']' 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1450668 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1450668 ']' 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1450668 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1450668 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1450668' 00:24:05.039 killing process with pid 1450668 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1450668 00:24:05.039 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1450668 00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
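Condensed, the tc2 teardown traced here (and finishing just below) is: unload the initiator-side kernel modules, kill the target app, then tear down the namespace plumbing. A sketch with values from this run; the netns removal is an assumption about what _remove_spdk_ns does, not a quote of the helper:

# Module unload is retried up to 20 times above; the rmmod output confirms
# nvme_tcp, nvme_fabrics and nvme_keyring went away.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 1450668 && wait 1450668      # nvmfpid for this tc2 run
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1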
00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.300 14:10:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.844 00:24:07.844 real 0m8.032s 00:24:07.844 user 0m24.453s 00:24:07.844 sys 0m1.239s 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:07.844 ************************************ 00:24:07.844 END TEST nvmf_shutdown_tc2 00:24:07.844 ************************************ 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:07.844 ************************************ 00:24:07.844 START TEST nvmf_shutdown_tc3 00:24:07.844 ************************************ 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
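The gather_supported_nvmf_pci_devs pass that follows classifies NICs by PCI vendor/device ID before choosing the two test interfaces. In miniature, and only as a simplified sketch of the helper (the 0x8086 vendor and the 0x159b/0x1592 E810 device IDs are the ones this trace matches on):

# Bucket E810 ports by walking sysfs, roughly what the
# e810+=(${pci_bus_cache[...]}) lines below do from a prebuilt cache.
e810=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
        e810+=("${pci##*/}")
    fi
done
printf 'Found E810 device: %s\n' "${e810[@]}"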
00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.844 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.845 14:10:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:07.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:07.845 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:07.845 Found net devices under 0000:31:00.0: cvl_0_0 00:24:07.845 14:10:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:07.845 Found net devices under 0000:31:00.1: cvl_0_1 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:24:07.845 00:24:07.845 --- 10.0.0.2 ping statistics --- 00:24:07.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.845 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:24:07.845 00:24:07.845 --- 10.0.0.1 ping statistics --- 00:24:07.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.845 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:07.845 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1452412 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1452412 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1452412 ']' 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.846 14:10:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.846 [2024-07-15 14:10:05.916623] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:07.846 [2024-07-15 14:10:05.916681] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.846 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.106 [2024-07-15 14:10:06.012043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.106 [2024-07-15 14:10:06.073463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.107 [2024-07-15 14:10:06.073498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.107 [2024-07-15 14:10:06.073505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.107 [2024-07-15 14:10:06.073509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.107 [2024-07-15 14:10:06.073513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
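The waitforlisten step above (max_retries=100 in autotest_common.sh) amounts to polling the RPC socket until the freshly started target answers. A reduced sketch; rpc_get_methods is a standard SPDK RPC, and the 0.5 s poll interval is illustrative rather than the helper's exact timing:

rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # Any successful RPC proves the target is up and listening.
    ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.5
done
(( i < max_retries )) || echo 'nvmf_tgt failed to start in time' >&2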
00:24:08.107 [2024-07-15 14:10:06.073622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.107 [2024-07-15 14:10:06.073791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:08.107 [2024-07-15 14:10:06.073907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.107 [2024-07-15 14:10:06.073910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.678 [2024-07-15 14:10:06.742941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.678 14:10:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.678 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.938 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.938 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.938 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.939 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.939 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:08.939 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.939 14:10:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.939 Malloc1 00:24:08.939 [2024-07-15 14:10:06.841476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.939 Malloc2 00:24:08.939 Malloc3 00:24:08.939 Malloc4 00:24:08.939 Malloc5 00:24:08.939 Malloc6 00:24:08.939 Malloc7 00:24:09.200 Malloc8 00:24:09.200 Malloc9 00:24:09.200 Malloc10 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1452611 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1452611 /var/tmp/bdevperf.sock 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1452611 ']' 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
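The waitforlisten calls traced here (for /var/tmp/spdk.sock above and /var/tmp/bdevperf.sock just now) are a poll loop: retry an RPC against the app's UNIX domain socket until the app answers or max_retries (100, per the trace) is exhausted. A rough sketch of that pattern, assuming the helper's shape from the visible trace locals; the real implementation in test/common/autotest_common.sh and the rpc.py path may differ:

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for (( i = max_retries; i != 0; i-- )); do
    kill -0 "$pid" 2> /dev/null || return 1   # give up if the target process died
    # any RPC that answers proves the socket is live; rpc_get_methods is cheap
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
    sleep 0.1
  done
  return 1
}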
00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.200 [2024-07-15 14:10:07.285074] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:24:09.200 [2024-07-15 14:10:07.285163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452611 ] 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.200 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.200 { 00:24:09.200 "params": { 00:24:09.200 "name": "Nvme$subsystem", 00:24:09.200 "trtype": "$TEST_TRANSPORT", 00:24:09.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.200 "adrfam": "ipv4", 00:24:09.200 "trsvcid": "$NVMF_PORT", 00:24:09.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.200 "hdgst": ${hdgst:-false}, 00:24:09.200 "ddgst": ${ddgst:-false} 00:24:09.200 }, 00:24:09.200 "method": "bdev_nvme_attach_controller" 00:24:09.200 } 00:24:09.200 EOF 00:24:09.200 )") 00:24:09.201 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.201 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.201 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.201 { 00:24:09.201 "params": { 00:24:09.201 "name": "Nvme$subsystem", 00:24:09.201 "trtype": "$TEST_TRANSPORT", 00:24:09.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.201 "adrfam": "ipv4", 00:24:09.201 "trsvcid": "$NVMF_PORT", 00:24:09.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.201 "hdgst": ${hdgst:-false}, 00:24:09.201 "ddgst": ${ddgst:-false} 00:24:09.201 }, 00:24:09.201 "method": "bdev_nvme_attach_controller" 00:24:09.201 } 00:24:09.201 EOF 00:24:09.201 )") 00:24:09.201 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.201 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:09.201 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:09.201 { 00:24:09.201 "params": { 00:24:09.201 "name": "Nvme$subsystem", 00:24:09.201 "trtype": "$TEST_TRANSPORT", 00:24:09.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.201 "adrfam": "ipv4", 00:24:09.201 "trsvcid": "$NVMF_PORT", 00:24:09.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.201 "hdgst": ${hdgst:-false}, 00:24:09.201 "ddgst": ${ddgst:-false} 00:24:09.201 }, 00:24:09.201 "method": "bdev_nvme_attach_controller" 00:24:09.201 } 00:24:09.201 EOF 00:24:09.201 )") 00:24:09.201 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:09.462 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
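The gen_nvmf_target_json trace above builds the JSON that bdevperf reads on --json /dev/fd/63 (a process substitution): one bdev_nvme_attach_controller fragment per subsystem, accumulated in the config array, comma-joined via IFS, and validated with jq. Condensed from the xtrace; the "subsystems"/"bdev" envelope below is the standard SPDK JSON-config shape, but the verbatim helper in nvmf/common.sh may wrap it slightly differently:

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    # one attach-controller fragment per subsystem, same fields as the trace
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # ${config[*]} joins the fragments with commas; jq validates/pretty-prints
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}
# typical call, matching the trace: bdevperf ... --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)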
00:24:09.462 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:09.462 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.462 14:10:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:09.462 "params": { 00:24:09.462 "name": "Nvme1", 00:24:09.462 "trtype": "tcp", 00:24:09.462 "traddr": "10.0.0.2", 00:24:09.462 "adrfam": "ipv4", 00:24:09.462 "trsvcid": "4420", 00:24:09.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:09.462 "hdgst": false, 00:24:09.462 "ddgst": false 00:24:09.462 }, 00:24:09.462 "method": "bdev_nvme_attach_controller" 00:24:09.462 },{ 00:24:09.462 "params": { 00:24:09.462 "name": "Nvme2", 00:24:09.462 "trtype": "tcp", 00:24:09.462 "traddr": "10.0.0.2", 00:24:09.462 "adrfam": "ipv4", 00:24:09.462 "trsvcid": "4420", 00:24:09.462 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:09.462 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:09.462 "hdgst": false, 00:24:09.462 "ddgst": false 00:24:09.462 }, 00:24:09.462 "method": "bdev_nvme_attach_controller" 00:24:09.462 },{ 00:24:09.462 "params": { 00:24:09.462 "name": "Nvme3", 00:24:09.462 "trtype": "tcp", 00:24:09.462 "traddr": "10.0.0.2", 00:24:09.462 "adrfam": "ipv4", 00:24:09.462 "trsvcid": "4420", 00:24:09.462 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:09.462 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:09.462 "hdgst": false, 00:24:09.462 "ddgst": false 00:24:09.462 }, 00:24:09.462 "method": "bdev_nvme_attach_controller" 00:24:09.462 },{ 00:24:09.462 "params": { 00:24:09.462 "name": "Nvme4", 00:24:09.462 "trtype": "tcp", 00:24:09.462 "traddr": "10.0.0.2", 00:24:09.462 "adrfam": "ipv4", 00:24:09.462 "trsvcid": "4420", 00:24:09.462 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:09.462 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:09.462 "hdgst": false, 00:24:09.462 "ddgst": false 00:24:09.462 }, 00:24:09.462 "method": "bdev_nvme_attach_controller" 00:24:09.462 },{ 00:24:09.462 "params": { 00:24:09.462 "name": "Nvme5", 00:24:09.462 "trtype": "tcp", 00:24:09.462 "traddr": "10.0.0.2", 00:24:09.462 "adrfam": "ipv4", 00:24:09.462 "trsvcid": "4420", 00:24:09.462 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:09.462 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:09.462 "hdgst": false, 00:24:09.462 "ddgst": false 00:24:09.462 }, 00:24:09.462 "method": "bdev_nvme_attach_controller" 00:24:09.462 },{ 00:24:09.462 "params": { 00:24:09.462 "name": "Nvme6", 00:24:09.462 "trtype": "tcp", 00:24:09.462 "traddr": "10.0.0.2", 00:24:09.462 "adrfam": "ipv4", 00:24:09.462 "trsvcid": "4420", 00:24:09.462 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:09.462 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:09.462 "hdgst": false, 00:24:09.462 "ddgst": false 00:24:09.462 }, 00:24:09.462 "method": "bdev_nvme_attach_controller" 00:24:09.462 },{ 00:24:09.462 "params": { 00:24:09.462 "name": "Nvme7", 00:24:09.462 "trtype": "tcp", 00:24:09.462 "traddr": "10.0.0.2", 00:24:09.462 "adrfam": "ipv4", 00:24:09.462 "trsvcid": "4420", 00:24:09.462 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:09.462 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:09.462 "hdgst": false, 00:24:09.462 "ddgst": false 00:24:09.463 }, 00:24:09.463 "method": "bdev_nvme_attach_controller" 00:24:09.463 },{ 00:24:09.463 "params": { 00:24:09.463 "name": "Nvme8", 00:24:09.463 "trtype": "tcp", 00:24:09.463 "traddr": "10.0.0.2", 00:24:09.463 "adrfam": "ipv4", 00:24:09.463 "trsvcid": "4420", 00:24:09.463 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:09.463 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:24:09.463 "hdgst": false, 00:24:09.463 "ddgst": false 00:24:09.463 }, 00:24:09.463 "method": "bdev_nvme_attach_controller" 00:24:09.463 },{ 00:24:09.463 "params": { 00:24:09.463 "name": "Nvme9", 00:24:09.463 "trtype": "tcp", 00:24:09.463 "traddr": "10.0.0.2", 00:24:09.463 "adrfam": "ipv4", 00:24:09.463 "trsvcid": "4420", 00:24:09.463 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:09.463 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:09.463 "hdgst": false, 00:24:09.463 "ddgst": false 00:24:09.463 }, 00:24:09.463 "method": "bdev_nvme_attach_controller" 00:24:09.463 },{ 00:24:09.463 "params": { 00:24:09.463 "name": "Nvme10", 00:24:09.463 "trtype": "tcp", 00:24:09.463 "traddr": "10.0.0.2", 00:24:09.463 "adrfam": "ipv4", 00:24:09.463 "trsvcid": "4420", 00:24:09.463 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:09.463 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:09.463 "hdgst": false, 00:24:09.463 "ddgst": false 00:24:09.463 }, 00:24:09.463 "method": "bdev_nvme_attach_controller" 00:24:09.463 }' 00:24:09.463 [2024-07-15 14:10:07.356121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.463 [2024-07-15 14:10:07.420926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.847 Running I/O for 10 seconds... 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.847 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:10.847 14:10:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.110 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:11.110 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:11.110 14:10:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:11.371 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:11.372 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:11.648 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:11.648 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:11.648 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1452412 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1452412 ']' 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1452412 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.649 14:10:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1452412 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1452412' 00:24:11.649 killing process with pid 1452412 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1452412 00:24:11.649 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1452412 00:24:11.649 [2024-07-15 14:10:09.633014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) 
to be set 00:24:11.649 [2024-07-15 14:10:09.633150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633345] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.633354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c6f0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.649 [2024-07-15 14:10:09.634421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the 
state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.634667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 
14:10:09.634671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f0d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.635979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.650 [2024-07-15 14:10:09.636024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.650 [2024-07-15 14:10:09.636041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.650 [2024-07-15 14:10:09.636056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.650 [2024-07-15 14:10:09.636071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f45d0 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.650 [2024-07-15 14:10:09.636131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.650 [2024-07-15 14:10:09.636153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:11.650 [2024-07-15 14:10:09.636171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.650 [2024-07-15 14:10:09.636180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with [2024-07-15 14:10:09.636186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:24:11.650 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.650 [2024-07-15 14:10:09.636193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bf000 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.650 [2024-07-15 14:10:09.636239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636266] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the 
state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156cb90 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.636783] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.651 [2024-07-15 14:10:09.637604] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.651 [2024-07-15 14:10:09.640021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156d050 is same with the state(5) to be set 00:24:11.651 [2024-07-15 14:10:09.640043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156d050 is same with the state(5) to be set 00:24:11.651 
00:24:11.651 [2024-07-15 14:10:09.640021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156d050 is same with the state(5) to be set
00:24:11.651 [... identical entry for tqpair=0x156d050 repeated through 14:10:09.640325; duplicates collapsed ...]
00:24:11.652 [2024-07-15 14:10:09.641223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156d4f0 is same with the state(5) to be set
00:24:11.652 [... identical entry for tqpair=0x156d4f0 repeated through 14:10:09.641524; duplicates collapsed ...]
00:24:11.652 [2024-07-15 14:10:09.642147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156d990 is same with the state(5) to be set
00:24:11.652 [... identical entry for tqpair=0x156d990 repeated through 14:10:09.642438; duplicates collapsed ...]
00:24:11.653 [2024-07-15 14:10:09.643177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156de30 is same with the state(5) to be set
00:24:11.653 [... identical entry for tqpair=0x156de30 repeated through 14:10:09.643479; duplicates collapsed ...]
00:24:11.654 [2024-07-15 14:10:09.644548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156e790 is same with the state(5) to be set
00:24:11.654 [... identical entry for tqpair=0x156e790 repeated through 14:10:09.652930, with a quiet gap 09.644687-09.652757; duplicates collapsed ...]
00:24:11.655 [2024-07-15 14:10:09.653387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156ec30 is same with the state(5) to be set
00:24:11.655 [... identical entry for tqpair=0x156ec30 repeated through 14:10:09.653668; duplicates collapsed ...]
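The flood of identical recv-state lines above is a guard, not a crash: the target's state-setting helper logs and returns when asked to set the receive state a qpair is already in, and a caller looping during disconnect repeats that line once per poll. A minimal sketch of such a guard follows (hypothetical and self-contained; that state 5 is the terminal error/quiescing state is an assumption, not something the log states):

    #include <stdio.h>

    enum recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 5 /* assumed */ };

    struct tqpair { enum recv_state recv_state; };

    static void set_recv_state(struct tqpair *tq, enum recv_state state)
    {
        if (tq->recv_state == state) {
            /* No-op transition: log and bail. This is the line the test
             * floods when teardown re-requests the current state. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, (int)state);
            return;
        }
        tq->recv_state = state;  /* normal path: accept the transition */
    }

    int main(void)
    {
        struct tqpair tq = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&tq, RECV_STATE_ERROR);  /* emits one such log line */
        return 0;
    }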
00:24:11.655 [2024-07-15 14:10:09.653668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156ec30 is same with the state(5) to be set
00:24:11.655 [2024-07-15 14:10:09.655350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:11.655 [2024-07-15 14:10:09.655371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.655 [2024-07-15 14:10:09.655380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:11.655 [2024-07-15 14:10:09.655387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.655 [2024-07-15 14:10:09.655396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:11.655 [2024-07-15 14:10:09.655403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.655 [2024-07-15 14:10:09.655411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:11.655 [2024-07-15 14:10:09.655418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.655 [2024-07-15 14:10:09.655425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7c10 is same with the state(5) to be set
[... the same four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION (00/08) pairs, each closed by the nvme_tcp.c: 327 recv-state *ERROR*, repeat for tqpair=0x13a8e20, tqpair=0x1392cb0, and tqpair=0x1216650 ...]
00:24:11.656 [2024-07-15 14:10:09.655703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f45d0 (9): Bad file descriptor
[... the same abort sequence repeats for tqpair=0x13bffd0, tqpair=0x1230970, tqpair=0x1236bc0, and tqpair=0xcf7610 ...]
00:24:11.656 [2024-07-15 14:10:09.656060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf000 (9): Bad file descriptor
00:24:11.656 [2024-07-15 14:10:09.656109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.656 [2024-07-15 14:10:09.656119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ cid:62 (lba:24320) and cid:63 (lba:24448), then WRITE cid:0 through cid:59 (lba:24576 through lba:32128, len:128 each), every command followed by the same ABORTED - SQ DELETION (00/08) completion ...]
00:24:11.658 [2024-07-15 14:10:09.657155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.658 [2024-07-15 14:10:09.657162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.658 [2024-07-15 14:10:09.657215] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1347e60 was disconnected and freed. reset controller.
00:24:11.658 [2024-07-15 14:10:09.657458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.658 [2024-07-15 14:10:09.657471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE cid:53 through cid:63 (lba:31360 through lba:32640), then READ cid:0 through cid:51 (lba:24576 through lba:31104), every command followed by the same ABORTED - SQ DELETION (00/08) completion; timestamps jump from 14:10:09.658086 to 14:10:09.664541 at READ cid:26 ...]
00:24:11.660 [2024-07-15 14:10:09.665062] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x134b310 was disconnected and freed. reset controller.
00:24:11.660 [2024-07-15 14:10:09.667937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:11.660 [2024-07-15 14:10:09.667975 .. 14:10:09.668107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpairs 0x13bffd0, 0x13c7c10, 0x13a8e20, 0x1392cb0, 0x1216650, 0x1230970, 0x1236bc0, 0xcf7610
00:24:11.660 [2024-07-15 14:10:09.668420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:11.660 [2024-07-15 14:10:09.668488 .. 14:10:09.669546] nvme_qpair.c: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 command/completion pairs; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:24:11.662 [2024-07-15 14:10:09.669554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac420 is same with the state(5) to be set
00:24:11.662 [2024-07-15 14:10:09.671111 .. 14:10:09.671234] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (four occurrences)
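"Unexpected PDU type 0x00" comes from the host-side handler for the NVMe/TCP PDU common header. Every NVMe/TCP PDU begins with an 8-byte common header whose first byte is the PDU type; type 0x00 is ICReq, which only an initiator sends, so a host that reads type 0x00 (typically all-zero bytes off a connection dying mid-reset) rejects the PDU. The standalone sketch below decodes that common header per the NVMe/TCP transport spec; the struct name and the sample wire bytes are assumptions for illustration, not SPDK's own definitions.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* NVMe/TCP PDU common header (CH): the first 8 bytes of every PDU. */
struct nvme_tcp_common_hdr {
    uint8_t  pdu_type; /* 0x00 = ICReq, sent host->target only */
    uint8_t  flags;
    uint8_t  hlen;     /* PDU header length */
    uint8_t  pdo;      /* PDU data offset */
    uint32_t plen;     /* total PDU length, little-endian on the wire */
};

int main(void)
{
    /* Eight zero bytes, as a torn-down connection often yields. */
    uint8_t wire[8] = {0};
    struct nvme_tcp_common_hdr ch;

    memcpy(&ch, wire, sizeof(ch));
    if (ch.pdu_type == 0x00) {
        /* A host never legitimately receives ICReq, hence the error. */
        printf("Unexpected PDU type 0x%02x\n", ch.pdu_type);
    }
    return 0;
}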
00:24:11.662 [2024-07-15 14:10:09.671528, 14:10:09.671569] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (two further occurrences)
00:24:11.662 [2024-07-15 14:10:09.671601 .. 14:10:09.672650] nvme_qpair.c: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 command/completion pairs; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:24:11.664 [2024-07-15 14:10:09.672658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ef70 is same with the state(5) to be set
00:24:11.664 [2024-07-15 14:10:09.675483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:11.664 [2024-07-15 14:10:09.675509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:11.664 [2024-07-15 14:10:09.675897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.664 [2024-07-15 14:10:09.675911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bffd0 with addr=10.0.0.2, port=4420
00:24:11.664 [2024-07-15 14:10:09.675920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bffd0 is same with the state(5) to be set
00:24:11.664 [2024-07-15 14:10:09.676268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.664 [2024-07-15 14:10:09.676278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf7610 with addr=10.0.0.2, port=4420
00:24:11.664 [2024-07-15 14:10:09.676285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7610 is same with the state(5) to be set
00:24:11.664 [2024-07-15 14:10:09.676387 .. 14:10:09.677247] nvme_qpair.c: READ sqid:1 cid:0-51 nsid:1 lba:24576-31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (52 command/completion pairs; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:24:11.665 [2024-07-15 14:10:09.677256] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.665 [2024-07-15 14:10:09.677264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.665 [2024-07-15 14:10:09.677273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.665 [2024-07-15 14:10:09.677281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.665 [2024-07-15 14:10:09.677290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.665 [2024-07-15 14:10:09.677297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.666 [2024-07-15 14:10:09.677444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.666 [2024-07-15 14:10:09.677452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134da70 is same with the state(5) to be set 00:24:11.666 [2024-07-15 14:10:09.677500] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x134da70 was disconnected and freed. reset controller. 00:24:11.666 [2024-07-15 14:10:09.677917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.666 [2024-07-15 14:10:09.677929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f45d0 with addr=10.0.0.2, port=4420 00:24:11.666 [2024-07-15 14:10:09.677936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f45d0 is same with the state(5) to be set 00:24:11.666 [2024-07-15 14:10:09.678113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.666 [2024-07-15 14:10:09.678122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bf000 with addr=10.0.0.2, port=4420 00:24:11.666 [2024-07-15 14:10:09.678129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bf000 is same with the state(5) to be set 00:24:11.666 [2024-07-15 14:10:09.678140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bffd0 (9): Bad file descriptor 00:24:11.666 [2024-07-15 14:10:09.678149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf7610 (9): Bad file descriptor 00:24:11.666 [2024-07-15 14:10:09.679914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:11.666 [2024-07-15 14:10:09.679939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f45d0 (9): Bad file descriptor 00:24:11.666 [2024-07-15 14:10:09.679949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf000 (9): Bad file descriptor 00:24:11.666 [2024-07-15 14:10:09.679957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:11.666 [2024-07-15 14:10:09.679964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:11.666 [2024-07-15 14:10:09.679976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
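The (00/08) status that repeats through the dump above decodes as status code type 0x0 (generic) / status code 0x08, ABORTED - SQ DELETION: once the reset path deletes the I/O submission queue, every READ still outstanding on sqid:1 completes with that code, one command/completion pair per cid, which is why the pairs differ only in cid and lba. A minimal sketch of a completion callback that recognizes this case, written against SPDK's public spdk/nvme.h; the enum names are standard SPDK spec constants and not taken from this log, so treat them as an assumption to verify against the checked-out tree:

/* read_done: spdk_nvme_cmd_cb-style completion callback that separates
 * "aborted because the SQ was deleted for a reset" from real I/O errors. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
read_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        /* The (00/08) case from the log: the qpair is being torn down,
         * the command never failed on media, so it is safe to requeue
         * it once the controller is reconnected. */
        printf("READ aborted by SQ deletion; requeue after reset\n");
        return;
    }
    if (spdk_nvme_cpl_is_error(cpl)) {
        printf("READ failed: sct=0x%x sc=0x%x\n",
               cpl->status.sct, cpl->status.sc);
    }
}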
00:24:11.666 [2024-07-15 14:10:09.679988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:24:11.666 [2024-07-15 14:10:09.679994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:24:11.666 [2024-07-15 14:10:09.680001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:24:11.666 [2024-07-15 14:10:09.680051] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:11.666 [2024-07-15 14:10:09.680062] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:11.666 [2024-07-15 14:10:09.680125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:11.666 [2024-07-15 14:10:09.680134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:11.666 [2024-07-15 14:10:09.680484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.666 [2024-07-15 14:10:09.680496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8e20 with addr=10.0.0.2, port=4420
00:24:11.666 [2024-07-15 14:10:09.680503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8e20 is same with the state(5) to be set
00:24:11.666 [2024-07-15 14:10:09.680510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:11.666 [2024-07-15 14:10:09.680516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:11.666 [2024-07-15 14:10:09.680523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:11.666 [2024-07-15 14:10:09.680535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:24:11.666 [2024-07-15 14:10:09.680541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:24:11.666 [2024-07-15 14:10:09.680548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
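errno = 111 in the posix_sock_create lines above is ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 while the target side of the test is down, so each reconnect attempt fails at the TCP socket, spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed", and nvme_ctrlr_fail leaves cnode1/cnode2/cnode7/cnode10 in failed state. A rough sketch of that disconnect/reconnect-poll cycle against the public controller API; the three function names exist in recent SPDK releases (the log itself references spdk_nvme_ctrlr_reconnect_poll_async), but the return-value handling below is an assumption to check against the header:

#include <errno.h>
#include <stdbool.h>
#include "spdk/nvme.h"

/* Drive one reset attempt: disconnect, kick off an async reconnect, then
 * poll until it finishes or we give up (max_polls is a made-up bound). */
static bool
reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr, int max_polls)
{
    if (spdk_nvme_ctrlr_disconnect(ctrlr) != 0) {
        return false; /* e.g. a reset/disconnect is already in progress */
    }
    spdk_nvme_ctrlr_reconnect_async(ctrlr);

    for (int i = 0; i < max_polls; i++) {
        int rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        if (rc == 0) {
            return true; /* controller reinitialized */
        }
        if (rc != -EAGAIN) {
            /* The "controller reinitialization failed" case in the log;
             * the controller is then marked failed (nvme_ctrlr_fail). */
            return false;
        }
        /* -EAGAIN (assumed): reconnect still in progress, poll again. */
    }
    return false;
}

With the target refusing connections, a loop like this ends in the failed branch, which lines up with the cnode* sequence above and with the "Unable to perform failover, already in progress" notices emitted while a reset is still pending.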
00:24:11.666 [2024-07-15 14:10:09.680592 .. 14:10:09.681650] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 command/completion pairs, lba advancing by 128 per cid]
00:24:11.668 [2024-07-15 14:10:09.681658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1349230 is same with the state(5) to be set
00:24:11.668 [2024-07-15 14:10:09.682931 .. 14:10:09.683984] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 command/completion pairs, lba advancing by 128 per cid]
00:24:11.670 [2024-07-15 14:10:09.683993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eed90 is same with the state(5) to be set
00:24:11.670 [2024-07-15 14:10:09.685261 .. 14:10:09.685451] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..11 nsid:1 lba:16384..17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, aborted with SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [dump continues] [2024-07-15 14:10:09.685458] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.670 [2024-07-15 14:10:09.685654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.670 [2024-07-15 14:10:09.685663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.685985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.685994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.671 [2024-07-15 14:10:09.686216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.671 [2024-07-15 14:10:09.686225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.686232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.686241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.686248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.686258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.686264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.686274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.686281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.686290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.686297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.686306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.686313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.686321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f0220 is same with the state(5) to be set 00:24:11.672 [2024-07-15 14:10:09.687587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.687987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.672 [2024-07-15 14:10:09.687993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.672 [2024-07-15 14:10:09.688003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:11.673 [2024-07-15 14:10:09.688389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 
14:10:09.688552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.673 [2024-07-15 14:10:09.688601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.673 [2024-07-15 14:10:09.688610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.688617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.688627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.688635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.688644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.688651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.688659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f0a40 is same with the state(5) to be set 00:24:11.674 [2024-07-15 14:10:09.689938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.689952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.689963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.689970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.689979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.689986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.689996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674 [2024-07-15 14:10:09.690162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.674 [2024-07-15 14:10:09.690169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.674
[... 49 near-identical READ/completion pairs elided: cid 14 through 62, lba stepping from 26368 to 32512 in strides of 128 blocks, every command completed ABORTED - SQ DELETION (00/08) ...]
[2024-07-15 14:10:09.690992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.675 [2024-07-15 14:10:09.690999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.675 [2024-07-15 14:10:09.691007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134c7c0 is same with the state(5) to be set 00:24:11.675 [2024-07-15 14:10:09.692745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.675 [2024-07-15 14:10:09.692774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.675 [2024-07-15 14:10:09.692782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:11.675 [2024-07-15 14:10:09.692793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:11.675 [2024-07-15 14:10:09.692803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:11.675 [2024-07-15 14:10:09.692839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8e20 (9): Bad file descriptor 00:24:11.675 [2024-07-15 14:10:09.692876] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.675 [2024-07-15 14:10:09.692888] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.675 [2024-07-15 14:10:09.692912] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.675 [2024-07-15 14:10:09.692976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:11.675 task offset: 24192 on job bdev=Nvme2n1 fails 00:24:11.675 00:24:11.675 Latency(us) 00:24:11.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.676 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme1n1 ended in about 0.95 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme1n1 : 0.95 134.90 8.43 67.45 0.00 312800.14 18568.53 258648.75 00:24:11.676 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme2n1 ended in about 0.94 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme2n1 : 0.94 200.09 12.51 67.75 0.00 231333.45 10704.21 239424.85 00:24:11.676 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme3n1 ended in about 0.96 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme3n1 : 0.96 199.80 12.49 66.60 0.00 227823.36 19988.48 253405.87 00:24:11.676 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme4n1 ended in about 0.96 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme4n1 : 0.96 199.32 12.46 66.44 0.00 223606.83 12779.52 248162.99 00:24:11.676 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme5n1 ended in about 0.97 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme5n1 : 0.97 132.56 8.29 66.28 0.00 292523.24 35826.35 249910.61 
00:24:11.676 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme6n1 ended in about 0.97 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme6n1 : 0.97 132.24 8.27 66.12 0.00 286927.93 20097.71 253405.87 00:24:11.676 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme7n1 ended in about 0.95 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme7n1 : 0.95 202.97 12.69 67.66 0.00 204779.63 11632.64 255153.49 00:24:11.676 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme8n1 ended in about 0.97 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme8n1 : 0.97 197.88 12.37 65.96 0.00 206021.33 15837.87 232434.35 00:24:11.676 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme9n1 ended in about 0.96 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme9n1 : 0.96 200.47 12.53 66.82 0.00 198014.19 13489.49 230686.72 00:24:11.676 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.676 Job: Nvme10n1 ended in about 0.95 seconds with error 00:24:11.676 Verification LBA range: start 0x0 length 0x400 00:24:11.676 Nvme10n1 : 0.95 134.46 8.40 67.23 0.00 255883.38 17367.04 272629.76 00:24:11.676 =================================================================================================================== 00:24:11.676 Total : 1734.70 108.42 668.32 0.00 239196.88 10704.21 272629.76 00:24:11.676 [2024-07-15 14:10:09.717786] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:11.676 [2024-07-15 14:10:09.717821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:11.676 [2024-07-15 14:10:09.718250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.718266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c7c10 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.718277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7c10 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.718612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.718622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1230970 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.718629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1230970 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.718958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.718968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1236bc0 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.718975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236bc0 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.718983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:11.676 [2024-07-15 14:10:09.718989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] 
controller reinitialization failed 00:24:11.676 [2024-07-15 14:10:09.718997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:11.676 [2024-07-15 14:10:09.720352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:11.676 [2024-07-15 14:10:09.720365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:11.676 [2024-07-15 14:10:09.720375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:11.676 [2024-07-15 14:10:09.720388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:11.676 [2024-07-15 14:10:09.720397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.676 [2024-07-15 14:10:09.720781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.720793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1216650 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.720800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1216650 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.721159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.721168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1392cb0 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.721176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392cb0 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.721187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c7c10 (9): Bad file descriptor 00:24:11.676 [2024-07-15 14:10:09.721197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1230970 (9): Bad file descriptor 00:24:11.676 [2024-07-15 14:10:09.721206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1236bc0 (9): Bad file descriptor 00:24:11.676 [2024-07-15 14:10:09.721242] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.676 [2024-07-15 14:10:09.721256] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.676 [2024-07-15 14:10:09.721267] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
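The three "Unable to perform failover, already in progress" notices above mean a second failover request landed while an earlier one was still outstanding; with a single TCP path per controller and the target gone, each retry loop can only end in "Resetting controller failed". Were the target still reachable, the attach state could be inspected from the bdevperf side with a real RPC (a hypothetical probe for this run; the socket path is the conventional one used by these tests):
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
# lists the attached NVMe-oF controllers so you can see which are still present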
00:24:11.676 [2024-07-15 14:10:09.721616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.721629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bf000 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.721636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bf000 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.721936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.721946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f45d0 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.721953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f45d0 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.722295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.722304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf7610 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.722311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7610 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.722513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.676 [2024-07-15 14:10:09.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bffd0 with addr=10.0.0.2, port=4420 00:24:11.676 [2024-07-15 14:10:09.722529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bffd0 is same with the state(5) to be set 00:24:11.676 [2024-07-15 14:10:09.722538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1216650 (9): Bad file descriptor 00:24:11.676 [2024-07-15 14:10:09.722547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392cb0 (9): Bad file descriptor 00:24:11.676 [2024-07-15 14:10:09.722555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:11.676 [2024-07-15 14:10:09.722561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:11.676 [2024-07-15 14:10:09.722571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:11.676 [2024-07-15 14:10:09.722582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:11.676 [2024-07-15 14:10:09.722588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:11.676 [2024-07-15 14:10:09.722595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:11.676 [2024-07-15 14:10:09.722605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:11.676 [2024-07-15 14:10:09.722611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:11.676 [2024-07-15 14:10:09.722618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
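Back in the bdevperf summary table above, the numbers are internally consistent: each job issues 64 KiB (65536-byte) I/Os, so MiB/s = IOPS x 65536 / 2^20, and the Total row is the column sum over the ten jobs. A throwaway check, not part of the test suite:
awk 'BEGIN { printf "%.2f MiB/s\n", 134.90 * 65536 / 1048576 }'    # Nvme1n1 row -> 8.43, as reported
awk 'BEGIN { printf "%.2f IOPS\n", 134.90+200.09+199.80+199.32+132.56+132.24+202.97+197.88+200.47+134.46 }'    # -> 1734.69, the Total row up to rounding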
00:24:11.676 [2024-07-15 14:10:09.722687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:11.676 [2024-07-15 14:10:09.722697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.722703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.722709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.722722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf000 (9): Bad file descriptor 00:24:11.677 [2024-07-15 14:10:09.722731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f45d0 (9): Bad file descriptor 00:24:11.677 [2024-07-15 14:10:09.722741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf7610 (9): Bad file descriptor 00:24:11.677 [2024-07-15 14:10:09.722749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bffd0 (9): Bad file descriptor 00:24:11.677 [2024-07-15 14:10:09.722762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:11.677 [2024-07-15 14:10:09.722768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:11.677 [2024-07-15 14:10:09.722774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:11.677 [2024-07-15 14:10:09.722784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:11.677 [2024-07-15 14:10:09.722790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:11.677 [2024-07-15 14:10:09.722796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:11.677 [2024-07-15 14:10:09.722823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.722829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.723046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.677 [2024-07-15 14:10:09.723057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8e20 with addr=10.0.0.2, port=4420 00:24:11.677 [2024-07-15 14:10:09.723064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8e20 is same with the state(5) to be set 00:24:11.677 [2024-07-15 14:10:09.723071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:11.677 [2024-07-15 14:10:09.723077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:11.677 [2024-07-15 14:10:09.723084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:24:11.677 [2024-07-15 14:10:09.723093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:11.677 [2024-07-15 14:10:09.723103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:11.677 [2024-07-15 14:10:09.723109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:11.677 [2024-07-15 14:10:09.723119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:11.677 [2024-07-15 14:10:09.723125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:11.677 [2024-07-15 14:10:09.723131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:11.677 [2024-07-15 14:10:09.723140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:11.677 [2024-07-15 14:10:09.723146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:11.677 [2024-07-15 14:10:09.723153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:11.677 [2024-07-15 14:10:09.723181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.723188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.723194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.723199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.677 [2024-07-15 14:10:09.723207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8e20 (9): Bad file descriptor 00:24:11.677 [2024-07-15 14:10:09.723235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:11.677 [2024-07-15 14:10:09.723242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:11.677 [2024-07-15 14:10:09.723249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:11.677 [2024-07-15 14:10:09.723276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
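Every connect() failure in this cascade reports errno = 111, i.e. ECONNREFUSED: the shutdown test has already taken the target down, so nothing listens on 10.0.0.2:4420 any more, each reconnect attempt is refused, and every controller reset therefore completes as "Resetting controller failed". The errno mapping can be confirmed with a one-liner (illustrative only):
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused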
00:24:11.939 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:11.939 14:10:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1452611 00:24:12.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1452611) - No such process 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.884 rmmod nvme_tcp 00:24:12.884 rmmod nvme_fabrics 00:24:12.884 rmmod nvme_keyring 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.884 14:10:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.432 14:10:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.432 00:24:15.432 real 0m7.593s 00:24:15.432 user 0m17.981s 00:24:15.432 sys 0m1.223s 00:24:15.432 
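Condensed, the stoptarget/nvmftestfini sequence just traced unwinds everything tc3 set up. This is a reconstruction from the trace, not an excerpt of nvmf/common.sh; in particular the netns deletion is an assumption about what _remove_spdk_ns does on this run:
rm -f ./local-job0-0-verify.state
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
sync
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1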
14:10:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:15.432 14:10:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 ************************************ 00:24:15.432 END TEST nvmf_shutdown_tc3 00:24:15.432 ************************************ 00:24:15.432 14:10:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:15.432 14:10:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:15.432 00:24:15.432 real 0m33.355s 00:24:15.432 user 1m15.396s 00:24:15.432 sys 0m10.083s 00:24:15.432 14:10:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:15.432 14:10:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 ************************************ 00:24:15.432 END TEST nvmf_shutdown 00:24:15.432 ************************************ 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:15.432 14:10:13 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 14:10:13 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 14:10:13 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:15.432 14:10:13 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.432 14:10:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 ************************************ 00:24:15.432 START TEST nvmf_multicontroller 00:24:15.432 ************************************ 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:15.432 * Looking for test storage... 
00:24:15.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.432 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:15.433 14:10:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:15.433 14:10:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.573 14:10:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:23.573 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:23.573 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:23.573 Found net devices under 0000:31:00.0: cvl_0_0 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:23.573 Found net devices under 0000:31:00.1: cvl_0_1 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.573 14:10:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:24:23.573 00:24:23.573 --- 10.0.0.2 ping statistics --- 00:24:23.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.573 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:24:23.573 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:24:23.573 00:24:23.573 --- 10.0.0.1 ping statistics --- 00:24:23.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.574 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1458090 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1458090 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1458090 ']' 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:23.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.574 14:10:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:23.574 [2024-07-15 14:10:21.647932] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:23.574 [2024-07-15 14:10:21.647992] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.836 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.836 [2024-07-15 14:10:21.744198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:23.836 [2024-07-15 14:10:21.839081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.836 [2024-07-15 14:10:21.839140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.836 [2024-07-15 14:10:21.839149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.836 [2024-07-15 14:10:21.839155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.836 [2024-07-15 14:10:21.839161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.836 [2024-07-15 14:10:21.839303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.836 [2024-07-15 14:10:21.839466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.836 [2024-07-15 14:10:21.839467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.409 [2024-07-15 14:10:22.477475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.409 14:10:22 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.409 Malloc0 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.409 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 [2024-07-15 14:10:22.547551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 [2024-07-15 14:10:22.559487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 Malloc1 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
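The rpc_cmd sequence above builds the whole multicontroller topology: one malloc-backed namespace per subsystem, and two TCP listeners (ports 4420 and 4421) on each subsystem so the host has two paths to the same storage. A minimal standalone sketch of the equivalent provisioning, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock; the loop and the serial-number formatting are illustrative, not the test script's literal code, but every RPC name and argument is taken from the log itself:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport (-o and -u 8192 exactly as passed by the test)
for i in 1 2; do
  $rpc bdev_malloc_create 64 512 -b "Malloc$((i - 1))"    # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
  # Two listeners per subsystem: the second port (4421) is the path the
  # later failover/detach steps exercise.
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
done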
00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1458338 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1458338 /var/tmp/bdevperf.sock 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1458338 ']' 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
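bdevperf was started with -z, so it sits idle on its own RPC socket (/var/tmp/bdevperf.sock) until controllers are attached from outside and perform_tests is issued. A sketch of the first attach it is about to receive, with the flags copied from the log; -i/-c pin the host-side address and service id, which is what makes the later duplicate attaches collide with this controller:

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

The NOT blocks that follow re-issue this call while keeping the name NVMe0, varying only the host NQN (-q), the subsystem NQN, or the -x multipath mode (disable/failover); each variant is expected to fail with JSON-RPC error -114, since none of them adds a genuinely new path to the existing controller. The attaches against the second listener port (4421) then go through: an extra path for NVMe0, which is detached again, and a separate controller NVMe1 used by the I/O run.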
00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.671 14:10:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.617 NVMe0n1 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.617 1 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:25.617 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.618 request: 00:24:25.618 { 00:24:25.618 "name": "NVMe0", 00:24:25.618 "trtype": "tcp", 00:24:25.618 "traddr": "10.0.0.2", 00:24:25.618 "adrfam": "ipv4", 00:24:25.618 "trsvcid": "4420", 00:24:25.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.618 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:25.618 "hostaddr": "10.0.0.2", 00:24:25.618 "hostsvcid": "60000", 00:24:25.618 "prchk_reftag": false, 
00:24:25.618 "prchk_guard": false, 00:24:25.618 "hdgst": false, 00:24:25.618 "ddgst": false, 00:24:25.618 "method": "bdev_nvme_attach_controller", 00:24:25.618 "req_id": 1 00:24:25.618 } 00:24:25.618 Got JSON-RPC error response 00:24:25.618 response: 00:24:25.618 { 00:24:25.618 "code": -114, 00:24:25.618 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:25.618 } 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.618 request: 00:24:25.618 { 00:24:25.618 "name": "NVMe0", 00:24:25.618 "trtype": "tcp", 00:24:25.618 "traddr": "10.0.0.2", 00:24:25.618 "adrfam": "ipv4", 00:24:25.618 "trsvcid": "4420", 00:24:25.618 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:25.618 "hostaddr": "10.0.0.2", 00:24:25.618 "hostsvcid": "60000", 00:24:25.618 "prchk_reftag": false, 00:24:25.618 "prchk_guard": false, 00:24:25.618 "hdgst": false, 00:24:25.618 "ddgst": false, 00:24:25.618 "method": "bdev_nvme_attach_controller", 00:24:25.618 "req_id": 1 00:24:25.618 } 00:24:25.618 Got JSON-RPC error response 00:24:25.618 response: 00:24:25.618 { 00:24:25.618 "code": -114, 00:24:25.618 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:25.618 } 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.618 request: 00:24:25.618 { 00:24:25.618 "name": "NVMe0", 00:24:25.618 "trtype": "tcp", 00:24:25.618 "traddr": "10.0.0.2", 00:24:25.618 "adrfam": "ipv4", 00:24:25.618 "trsvcid": "4420", 00:24:25.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.618 "hostaddr": "10.0.0.2", 00:24:25.618 "hostsvcid": "60000", 00:24:25.618 "prchk_reftag": false, 00:24:25.618 "prchk_guard": false, 00:24:25.618 "hdgst": false, 00:24:25.618 "ddgst": false, 00:24:25.618 "multipath": "disable", 00:24:25.618 "method": "bdev_nvme_attach_controller", 00:24:25.618 "req_id": 1 00:24:25.618 } 00:24:25.618 Got JSON-RPC error response 00:24:25.618 response: 00:24:25.618 { 00:24:25.618 "code": -114, 00:24:25.618 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:25.618 } 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.618 request: 00:24:25.618 { 00:24:25.618 "name": "NVMe0", 00:24:25.618 "trtype": "tcp", 00:24:25.618 "traddr": "10.0.0.2", 00:24:25.618 "adrfam": "ipv4", 00:24:25.618 "trsvcid": "4420", 00:24:25.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.618 "hostaddr": "10.0.0.2", 00:24:25.618 "hostsvcid": "60000", 00:24:25.618 "prchk_reftag": false, 00:24:25.618 "prchk_guard": false, 00:24:25.618 "hdgst": false, 00:24:25.618 "ddgst": false, 00:24:25.618 "multipath": "failover", 00:24:25.618 "method": "bdev_nvme_attach_controller", 00:24:25.618 "req_id": 1 00:24:25.618 } 00:24:25.618 Got JSON-RPC error response 00:24:25.618 response: 00:24:25.618 { 00:24:25.618 "code": -114, 00:24:25.618 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:25.618 } 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:25.618 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:25.619 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:25.619 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.619 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.619 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.880 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.880 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:25.880 14:10:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.267 0 00:24:27.267 14:10:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:27.267 14:10:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.267 14:10:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1458338 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1458338 ']' 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1458338 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458338 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:27.267 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458338' 00:24:27.268 killing process with pid 1458338 00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1458338 00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1458338 00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat
00:24:27.268 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:27.268 [2024-07-15 14:10:22.677578] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:24:27.268 [2024-07-15 14:10:22.677634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458338 ]
00:24:27.268 EAL: No free 2048 kB hugepages reported on node 1
00:24:27.268 [2024-07-15 14:10:22.743002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:27.268 [2024-07-15 14:10:22.807391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:27.268 [2024-07-15 14:10:23.856637] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bd0871a7-f46a-4242-b707-8985b1ec5422 already exists
00:24:27.268 [2024-07-15 14:10:23.856668] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:bd0871a7-f46a-4242-b707-8985b1ec5422 alias for bdev NVMe1n1
00:24:27.268 [2024-07-15 14:10:23.856676] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:24:27.268 Running I/O for 1 seconds...
00:24:27.268
00:24:27.268 Latency(us)
00:24:27.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.268 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:24:27.268 NVMe0n1 : 1.00 21042.41 82.20 0.00 0.00 6069.01 3713.71 14636.37
00:24:27.268 ===================================================================================================================
00:24:27.268 Total : 21042.41 82.20 0.00 0.00 6069.01 3713.71 14636.37
00:24:27.268 Received shutdown signal, test time was about 1.000000 seconds
00:24:27.268
00:24:27.268 Latency(us)
00:24:27.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.268 ===================================================================================================================
00:24:27.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:27.268 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:27.268 rmmod nvme_tcp
00:24:27.268 rmmod nvme_fabrics
00:24:27.268 rmmod nvme_keyring
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1458090 ']'
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1458090
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1458090 ']'
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1458090
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458090
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458090'
00:24:27.268 killing process with pid 1458090
00:24:27.268 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1458090
00:24:27.268 14:10:25
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1458090 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.529 14:10:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.077 14:10:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.077 00:24:30.077 real 0m14.386s 00:24:30.077 user 0m16.305s 00:24:30.077 sys 0m6.764s 00:24:30.077 14:10:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:30.077 14:10:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:30.077 ************************************ 00:24:30.077 END TEST nvmf_multicontroller 00:24:30.077 ************************************ 00:24:30.077 14:10:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:30.077 14:10:27 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:30.077 14:10:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:30.077 14:10:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.077 14:10:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.077 ************************************ 00:24:30.077 START TEST nvmf_aer 00:24:30.077 ************************************ 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:30.077 * Looking for test storage... 
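The multicontroller test ends with nvmftestfini, whose effect is visible above: the target process is killed, the kernel initiator modules are unloaded, and the namespace plumbing is torn down. A hypothetical manual equivalent for this topology (interface and namespace names as used throughout this job; nvmfpid stands in for the target PID the harness tracks, 1458090 here):

kill "$nvmfpid" && wait "$nvmfpid"    # stop the namespaced nvmf_tgt
modprobe -v -r nvme-tcp               # also drops the unused nvme_fabrics/nvme_keyring dependencies
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk       # what _remove_spdk_ns amounts to
ip -4 addr flush cvl_0_1              # clear the initiator-side address

The aer test whose banner appears just above rebuilds exactly this plumbing from scratch, which is why the same nvmf/common.sh setup lines reappear below.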
00:24:30.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.077 14:10:27 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.078 14:10:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.249 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:38.250 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:24:38.250 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:38.250 Found net devices under 0000:31:00.0: cvl_0_0 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:38.250 Found net devices under 0000:31:00.1: cvl_0_1 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.250 
14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:24:38.250 00:24:38.250 --- 10.0.0.2 ping statistics --- 00:24:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.250 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:24:38.250 00:24:38.250 --- 10.0.0.1 ping statistics --- 00:24:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.250 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1463368 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1463368 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1463368 ']' 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.250 14:10:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.250 [2024-07-15 14:10:35.528873] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:38.250 [2024-07-15 14:10:35.528938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.250 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.250 [2024-07-15 14:10:35.608542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.250 [2024-07-15 14:10:35.684173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.250 [2024-07-15 14:10:35.684212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:38.250 [2024-07-15 14:10:35.684220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.250 [2024-07-15 14:10:35.684226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.250 [2024-07-15 14:10:35.684232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.250 [2024-07-15 14:10:35.684371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.250 [2024-07-15 14:10:35.684491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.250 [2024-07-15 14:10:35.684654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.250 [2024-07-15 14:10:35.684655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.250 [2024-07-15 14:10:36.353317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.250 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.511 Malloc0 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.511 [2024-07-15 14:10:36.412564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.511 [ 00:24:38.511 { 00:24:38.511 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.511 "subtype": "Discovery", 00:24:38.511 "listen_addresses": [], 00:24:38.511 "allow_any_host": true, 00:24:38.511 "hosts": [] 00:24:38.511 }, 00:24:38.511 { 00:24:38.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.511 "subtype": "NVMe", 00:24:38.511 "listen_addresses": [ 00:24:38.511 { 00:24:38.511 "trtype": "TCP", 00:24:38.511 "adrfam": "IPv4", 00:24:38.511 "traddr": "10.0.0.2", 00:24:38.511 "trsvcid": "4420" 00:24:38.511 } 00:24:38.511 ], 00:24:38.511 "allow_any_host": true, 00:24:38.511 "hosts": [], 00:24:38.511 "serial_number": "SPDK00000000000001", 00:24:38.511 "model_number": "SPDK bdev Controller", 00:24:38.511 "max_namespaces": 2, 00:24:38.511 "min_cntlid": 1, 00:24:38.511 "max_cntlid": 65519, 00:24:38.511 "namespaces": [ 00:24:38.511 { 00:24:38.511 "nsid": 1, 00:24:38.511 "bdev_name": "Malloc0", 00:24:38.511 "name": "Malloc0", 00:24:38.511 "nguid": "FA156A6AFBE44E83995A938CBA9D42F0", 00:24:38.511 "uuid": "fa156a6a-fbe4-4e83-995a-938cba9d42f0" 00:24:38.511 } 00:24:38.511 ] 00:24:38.511 } 00:24:38.511 ] 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1463721 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:38.511 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:38.511 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.770 Malloc1 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.770 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.770 Asynchronous Event Request test 00:24:38.770 Attaching to 10.0.0.2 00:24:38.770 Attached to 10.0.0.2 00:24:38.770 Registering asynchronous event callbacks... 00:24:38.770 Starting namespace attribute notice tests for all controllers... 00:24:38.770 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:38.770 aer_cb - Changed Namespace 00:24:38.770 Cleaning up... 00:24:38.770 [ 00:24:38.770 { 00:24:38.770 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.770 "subtype": "Discovery", 00:24:38.770 "listen_addresses": [], 00:24:38.770 "allow_any_host": true, 00:24:38.770 "hosts": [] 00:24:38.770 }, 00:24:38.770 { 00:24:38.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.770 "subtype": "NVMe", 00:24:38.770 "listen_addresses": [ 00:24:38.770 { 00:24:38.770 "trtype": "TCP", 00:24:38.770 "adrfam": "IPv4", 00:24:38.770 "traddr": "10.0.0.2", 00:24:38.770 "trsvcid": "4420" 00:24:38.770 } 00:24:38.770 ], 00:24:38.770 "allow_any_host": true, 00:24:38.770 "hosts": [], 00:24:38.770 "serial_number": "SPDK00000000000001", 00:24:38.770 "model_number": "SPDK bdev Controller", 00:24:38.770 "max_namespaces": 2, 00:24:38.770 "min_cntlid": 1, 00:24:38.770 "max_cntlid": 65519, 00:24:38.770 "namespaces": [ 00:24:38.770 { 00:24:38.770 "nsid": 1, 00:24:38.770 "bdev_name": "Malloc0", 00:24:38.770 "name": "Malloc0", 00:24:38.770 "nguid": "FA156A6AFBE44E83995A938CBA9D42F0", 00:24:38.770 "uuid": "fa156a6a-fbe4-4e83-995a-938cba9d42f0" 00:24:38.770 }, 00:24:38.770 { 00:24:38.771 "nsid": 2, 00:24:38.771 "bdev_name": "Malloc1", 00:24:38.771 "name": "Malloc1", 00:24:38.771 "nguid": "6EDB970003B449ACA30AC00A9BF316BD", 00:24:38.771 "uuid": "6edb9700-03b4-49ac-a30a-c00a9bf316bd" 00:24:38.771 } 00:24:38.771 ] 00:24:38.771 } 00:24:38.771 ] 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1463721 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.771 rmmod nvme_tcp 00:24:38.771 rmmod nvme_fabrics 00:24:38.771 rmmod nvme_keyring 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1463368 ']' 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1463368 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1463368 ']' 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1463368 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463368 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463368' 00:24:38.771 killing process with pid 1463368 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1463368 00:24:38.771 14:10:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1463368 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:24:39.030 14:10:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.576 14:10:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.576 00:24:41.576 real 0m11.404s 00:24:41.576 user 0m7.397s 00:24:41.576 sys 0m6.190s 00:24:41.576 14:10:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:41.576 14:10:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.576 ************************************ 00:24:41.576 END TEST nvmf_aer 00:24:41.576 ************************************ 00:24:41.576 14:10:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:41.576 14:10:39 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:41.576 14:10:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:41.576 14:10:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.576 14:10:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:41.576 ************************************ 00:24:41.576 START TEST nvmf_async_init 00:24:41.576 ************************************ 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:41.576 * Looking for test storage... 00:24:41.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=94bf51a4d67749f883c7f2af9973cfcc 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.576 14:10:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.734 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:49.735 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:49.735 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:49.735 Found net devices under 0000:31:00.0: cvl_0_0 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:49.735 Found net devices under 0000:31:00.1: cvl_0_1 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:49.735 
14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.735 14:10:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:49.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:49.735 00:24:49.735 --- 10.0.0.2 ping statistics --- 00:24:49.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.735 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:24:49.735 00:24:49.735 --- 10.0.0.1 ping statistics --- 00:24:49.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.735 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1468400 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1468400 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1468400 ']' 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.735 14:10:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:49.735 [2024-07-15 14:10:47.371495] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:49.735 [2024-07-15 14:10:47.371558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.735 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.735 [2024-07-15 14:10:47.450605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.735 [2024-07-15 14:10:47.523935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.735 [2024-07-15 14:10:47.523976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.735 [2024-07-15 14:10:47.523984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.735 [2024-07-15 14:10:47.523990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.735 [2024-07-15 14:10:47.523995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
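For readers reconstructing the setup from the trace: nvmf/common.sh builds the two-sided TCP topology by moving one port of the NIC pair into a private network namespace, so initiator traffic at 10.0.0.1 has to cross the link to reach the target at 10.0.0.2 instead of being short-circuited over loopback. A minimal sketch of that sequence, using the cvl_0_0/cvl_0_1 names this run's e810 ports happened to get (the variable name is ours; every command appears in the trace above):

TGT_NS=cvl_0_0_ns_spdk                        # namespace name used by this run
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"           # target-side port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays local
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # the two pings above confirm the path
# The target then runs entirely inside the namespace, one reactor on core 0:
ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

Running nvmf_tgt itself under ip netns exec is what makes every listener it opens live on the target-side port.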
00:24:49.735 [2024-07-15 14:10:47.524019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.309 [2024-07-15 14:10:48.199141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.309 null0 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 94bf51a4d67749f883c7f2af9973cfcc 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.309 [2024-07-15 14:10:48.239345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.309 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.570 nvme0n1 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 [ 00:24:50.571 { 00:24:50.571 "name": "nvme0n1", 00:24:50.571 "aliases": [ 00:24:50.571 "94bf51a4-d677-49f8-83c7-f2af9973cfcc" 00:24:50.571 ], 00:24:50.571 "product_name": "NVMe disk", 00:24:50.571 "block_size": 512, 00:24:50.571 "num_blocks": 2097152, 00:24:50.571 "uuid": "94bf51a4-d677-49f8-83c7-f2af9973cfcc", 00:24:50.571 "assigned_rate_limits": { 00:24:50.571 "rw_ios_per_sec": 0, 00:24:50.571 "rw_mbytes_per_sec": 0, 00:24:50.571 "r_mbytes_per_sec": 0, 00:24:50.571 "w_mbytes_per_sec": 0 00:24:50.571 }, 00:24:50.571 "claimed": false, 00:24:50.571 "zoned": false, 00:24:50.571 "supported_io_types": { 00:24:50.571 "read": true, 00:24:50.571 "write": true, 00:24:50.571 "unmap": false, 00:24:50.571 "flush": true, 00:24:50.571 "reset": true, 00:24:50.571 "nvme_admin": true, 00:24:50.571 "nvme_io": true, 00:24:50.571 "nvme_io_md": false, 00:24:50.571 "write_zeroes": true, 00:24:50.571 "zcopy": false, 00:24:50.571 "get_zone_info": false, 00:24:50.571 "zone_management": false, 00:24:50.571 "zone_append": false, 00:24:50.571 "compare": true, 00:24:50.571 "compare_and_write": true, 00:24:50.571 "abort": true, 00:24:50.571 "seek_hole": false, 00:24:50.571 "seek_data": false, 00:24:50.571 "copy": true, 00:24:50.571 "nvme_iov_md": false 00:24:50.571 }, 00:24:50.571 "memory_domains": [ 00:24:50.571 { 00:24:50.571 "dma_device_id": "system", 00:24:50.571 "dma_device_type": 1 00:24:50.571 } 00:24:50.571 ], 00:24:50.571 "driver_specific": { 00:24:50.571 "nvme": [ 00:24:50.571 { 00:24:50.571 "trid": { 00:24:50.571 "trtype": "TCP", 00:24:50.571 "adrfam": "IPv4", 00:24:50.571 "traddr": "10.0.0.2", 00:24:50.571 "trsvcid": "4420", 00:24:50.571 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:50.571 }, 00:24:50.571 "ctrlr_data": { 00:24:50.571 "cntlid": 1, 00:24:50.571 "vendor_id": "0x8086", 00:24:50.571 "model_number": "SPDK bdev Controller", 00:24:50.571 "serial_number": "00000000000000000000", 00:24:50.571 "firmware_revision": "24.09", 00:24:50.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.571 "oacs": { 00:24:50.571 "security": 0, 00:24:50.571 "format": 0, 00:24:50.571 "firmware": 0, 00:24:50.571 "ns_manage": 0 00:24:50.571 }, 00:24:50.571 "multi_ctrlr": true, 00:24:50.571 "ana_reporting": false 00:24:50.571 }, 00:24:50.571 "vs": { 00:24:50.571 "nvme_version": "1.3" 00:24:50.571 }, 00:24:50.571 "ns_data": { 00:24:50.571 "id": 1, 00:24:50.571 "can_share": true 00:24:50.571 } 00:24:50.571 } 00:24:50.571 ], 00:24:50.571 "mp_policy": "active_passive" 00:24:50.571 } 00:24:50.571 } 00:24:50.571 ] 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
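Condensed, the bring-up that produced the bdev dump above is the RPC sequence below. rpc_cmd in the harness is a thin wrapper; the sketch substitutes a hypothetical $RPC pointing at scripts/rpc.py on the target's default socket, and the shell variables are ours, not the test's:

RPC=scripts/rpc.py                                  # stand-in for the harness's rpc_cmd
NQN=nqn.2016-06.io.spdk:cnode0
$RPC nvmf_create_transport -t tcp -o                # flags copied from the trace
$RPC bdev_null_create null0 1024 512                # 1024 MiB null bdev, 512 B blocks
$RPC bdev_wait_for_examine
$RPC nvmf_create_subsystem "$NQN" -a                # -a: allow any host, for now
$RPC nvmf_subsystem_add_ns "$NQN" null0 -g 94bf51a4d67749f883c7f2af9973cfcc
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
# Loop back onto the target from the same SPDK app; the remote namespace
# then surfaces locally as bdev nvme0n1:
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n "$NQN"
$RPC bdev_get_bdevs -b nvme0n1                      # the JSON dump shown above

Note that the uuid reported for nvme0n1, 94bf51a4-d677-49f8-83c7-f2af9973cfcc, is just the -g nguid in hyphenated form: async_init.sh generated it with uuidgen and stripped the dashes before handing it to nvmf_subsystem_add_ns.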
00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 [2024-07-15 14:10:48.496150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.571 [2024-07-15 14:10:48.496210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c39f0 (9): Bad file descriptor 00:24:50.571 [2024-07-15 14:10:48.627852] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 [ 00:24:50.571 { 00:24:50.571 "name": "nvme0n1", 00:24:50.571 "aliases": [ 00:24:50.571 "94bf51a4-d677-49f8-83c7-f2af9973cfcc" 00:24:50.571 ], 00:24:50.571 "product_name": "NVMe disk", 00:24:50.571 "block_size": 512, 00:24:50.571 "num_blocks": 2097152, 00:24:50.571 "uuid": "94bf51a4-d677-49f8-83c7-f2af9973cfcc", 00:24:50.571 "assigned_rate_limits": { 00:24:50.571 "rw_ios_per_sec": 0, 00:24:50.571 "rw_mbytes_per_sec": 0, 00:24:50.571 "r_mbytes_per_sec": 0, 00:24:50.571 "w_mbytes_per_sec": 0 00:24:50.571 }, 00:24:50.571 "claimed": false, 00:24:50.571 "zoned": false, 00:24:50.571 "supported_io_types": { 00:24:50.571 "read": true, 00:24:50.571 "write": true, 00:24:50.571 "unmap": false, 00:24:50.571 "flush": true, 00:24:50.571 "reset": true, 00:24:50.571 "nvme_admin": true, 00:24:50.571 "nvme_io": true, 00:24:50.571 "nvme_io_md": false, 00:24:50.571 "write_zeroes": true, 00:24:50.571 "zcopy": false, 00:24:50.571 "get_zone_info": false, 00:24:50.571 "zone_management": false, 00:24:50.571 "zone_append": false, 00:24:50.571 "compare": true, 00:24:50.571 "compare_and_write": true, 00:24:50.571 "abort": true, 00:24:50.571 "seek_hole": false, 00:24:50.571 "seek_data": false, 00:24:50.571 "copy": true, 00:24:50.571 "nvme_iov_md": false 00:24:50.571 }, 00:24:50.571 "memory_domains": [ 00:24:50.571 { 00:24:50.571 "dma_device_id": "system", 00:24:50.571 "dma_device_type": 1 00:24:50.571 } 00:24:50.571 ], 00:24:50.571 "driver_specific": { 00:24:50.571 "nvme": [ 00:24:50.571 { 00:24:50.571 "trid": { 00:24:50.571 "trtype": "TCP", 00:24:50.571 "adrfam": "IPv4", 00:24:50.571 "traddr": "10.0.0.2", 00:24:50.571 "trsvcid": "4420", 00:24:50.571 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:50.571 }, 00:24:50.571 "ctrlr_data": { 00:24:50.571 "cntlid": 2, 00:24:50.571 "vendor_id": "0x8086", 00:24:50.571 "model_number": "SPDK bdev Controller", 00:24:50.571 "serial_number": "00000000000000000000", 00:24:50.571 "firmware_revision": "24.09", 00:24:50.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.571 "oacs": { 00:24:50.571 "security": 0, 00:24:50.571 "format": 0, 00:24:50.571 "firmware": 0, 00:24:50.571 "ns_manage": 0 00:24:50.571 }, 00:24:50.571 "multi_ctrlr": true, 00:24:50.571 "ana_reporting": false 00:24:50.571 }, 00:24:50.571 "vs": { 00:24:50.571 "nvme_version": "1.3" 00:24:50.571 }, 00:24:50.571 "ns_data": { 00:24:50.571 "id": 1, 00:24:50.571 "can_share": true 00:24:50.571 } 00:24:50.571 } 00:24:50.571 ], 00:24:50.571 "mp_policy": "active_passive" 00:24:50.571 } 00:24:50.571 } 
00:24:50.571 ] 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6N50pjFAkE 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6N50pjFAkE 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.571 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.833 [2024-07-15 14:10:48.688878] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.833 [2024-07-15 14:10:48.688998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6N50pjFAkE 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.833 [2024-07-15 14:10:48.696891] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6N50pjFAkE 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.833 [2024-07-15 14:10:48.704936] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.833 [2024-07-15 14:10:48.704972] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
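The TLS leg that follows condenses the same way (same hypothetical $RPC stand-in as above). The PSK literal is copied from the trace, and the warnings the target just printed apply to the sketch too: both the PSK-path form of nvmf_subsystem_add_host and spdk_nvme_ctrlr_opts.psk were already deprecated for removal in v24.09, so this is a snapshot of the 24.09-pre API rather than a recipe for current SPDK:

RPC=scripts/rpc.py; NQN=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host1
KEY=$(mktemp)                                       # /tmp/tmp.6N50pjFAkE in this run
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"                                   # keep the PSK private to the owner
$RPC nvmf_subsystem_allow_any_host "$NQN" --disable # hosts must match explicit entries
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host "$NQN" "$HOST" --psk "$KEY"
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n "$NQN" -q "$HOST" --psk "$KEY"
rm -f "$KEY"                                        # cleanup, as the test does below

Disabling allow_any_host first forces the connecting host to match the explicit entry that carries the PSK, which is what gives the secure-channel listener an identity to authenticate against.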
00:24:50.833 nvme0n1 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.833 [ 00:24:50.833 { 00:24:50.833 "name": "nvme0n1", 00:24:50.833 "aliases": [ 00:24:50.833 "94bf51a4-d677-49f8-83c7-f2af9973cfcc" 00:24:50.833 ], 00:24:50.833 "product_name": "NVMe disk", 00:24:50.833 "block_size": 512, 00:24:50.833 "num_blocks": 2097152, 00:24:50.833 "uuid": "94bf51a4-d677-49f8-83c7-f2af9973cfcc", 00:24:50.833 "assigned_rate_limits": { 00:24:50.833 "rw_ios_per_sec": 0, 00:24:50.833 "rw_mbytes_per_sec": 0, 00:24:50.833 "r_mbytes_per_sec": 0, 00:24:50.833 "w_mbytes_per_sec": 0 00:24:50.833 }, 00:24:50.833 "claimed": false, 00:24:50.833 "zoned": false, 00:24:50.833 "supported_io_types": { 00:24:50.833 "read": true, 00:24:50.833 "write": true, 00:24:50.833 "unmap": false, 00:24:50.833 "flush": true, 00:24:50.833 "reset": true, 00:24:50.833 "nvme_admin": true, 00:24:50.833 "nvme_io": true, 00:24:50.833 "nvme_io_md": false, 00:24:50.833 "write_zeroes": true, 00:24:50.833 "zcopy": false, 00:24:50.833 "get_zone_info": false, 00:24:50.833 "zone_management": false, 00:24:50.833 "zone_append": false, 00:24:50.833 "compare": true, 00:24:50.833 "compare_and_write": true, 00:24:50.833 "abort": true, 00:24:50.833 "seek_hole": false, 00:24:50.833 "seek_data": false, 00:24:50.833 "copy": true, 00:24:50.833 "nvme_iov_md": false 00:24:50.833 }, 00:24:50.833 "memory_domains": [ 00:24:50.833 { 00:24:50.833 "dma_device_id": "system", 00:24:50.833 "dma_device_type": 1 00:24:50.833 } 00:24:50.833 ], 00:24:50.833 "driver_specific": { 00:24:50.833 "nvme": [ 00:24:50.833 { 00:24:50.833 "trid": { 00:24:50.833 "trtype": "TCP", 00:24:50.833 "adrfam": "IPv4", 00:24:50.833 "traddr": "10.0.0.2", 00:24:50.833 "trsvcid": "4421", 00:24:50.833 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:50.833 }, 00:24:50.833 "ctrlr_data": { 00:24:50.833 "cntlid": 3, 00:24:50.833 "vendor_id": "0x8086", 00:24:50.833 "model_number": "SPDK bdev Controller", 00:24:50.833 "serial_number": "00000000000000000000", 00:24:50.833 "firmware_revision": "24.09", 00:24:50.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.833 "oacs": { 00:24:50.833 "security": 0, 00:24:50.833 "format": 0, 00:24:50.833 "firmware": 0, 00:24:50.833 "ns_manage": 0 00:24:50.833 }, 00:24:50.833 "multi_ctrlr": true, 00:24:50.833 "ana_reporting": false 00:24:50.833 }, 00:24:50.833 "vs": { 00:24:50.833 "nvme_version": "1.3" 00:24:50.833 }, 00:24:50.833 "ns_data": { 00:24:50.833 "id": 1, 00:24:50.833 "can_share": true 00:24:50.833 } 00:24:50.833 } 00:24:50.833 ], 00:24:50.833 "mp_policy": "active_passive" 00:24:50.833 } 00:24:50.833 } 00:24:50.833 ] 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.833 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.6N50pjFAkE 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.834 rmmod nvme_tcp 00:24:50.834 rmmod nvme_fabrics 00:24:50.834 rmmod nvme_keyring 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1468400 ']' 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1468400 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1468400 ']' 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1468400 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1468400 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1468400' 00:24:50.834 killing process with pid 1468400 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1468400 00:24:50.834 [2024-07-15 14:10:48.935842] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:50.834 [2024-07-15 14:10:48.935868] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:50.834 14:10:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1468400 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.095 14:10:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:53.643 14:10:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.643 00:24:53.643 real 0m11.967s 00:24:53.643 user 0m4.180s 00:24:53.643 sys 0m6.263s 00:24:53.643 14:10:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.643 14:10:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:53.643 ************************************ 00:24:53.643 END TEST nvmf_async_init 00:24:53.643 ************************************ 00:24:53.643 14:10:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:53.643 14:10:51 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:53.643 14:10:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.643 14:10:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.643 14:10:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.643 ************************************ 00:24:53.643 START TEST dma 00:24:53.643 ************************************ 00:24:53.643 14:10:51 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:53.643 * Looking for test storage... 00:24:53.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.644 14:10:51 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.644 14:10:51 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.644 14:10:51 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.644 14:10:51 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.644 14:10:51 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.644 14:10:51 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.644 14:10:51 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.644 14:10:51 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:53.644 14:10:51 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.644 14:10:51 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.644 14:10:51 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:53.644 14:10:51 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:53.644 00:24:53.644 real 0m0.121s 00:24:53.644 user 0m0.049s 00:24:53.644 sys 0m0.080s 00:24:53.644 14:10:51 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.644 14:10:51 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:53.644 ************************************ 00:24:53.644 END TEST dma 00:24:53.644 ************************************ 00:24:53.644 14:10:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:53.644 14:10:51 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:53.644 14:10:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.644 14:10:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.644 14:10:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.644 ************************************ 00:24:53.644 START TEST nvmf_identify 00:24:53.644 ************************************ 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:53.644 * Looking for test storage... 00:24:53.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.644 14:10:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.645 14:10:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.788 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:01.789 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:01.789 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:01.789 Found net devices under 0000:31:00.0: cvl_0_0 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:01.789 Found net devices under 0000:31:00.1: cvl_0_1 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:01.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:25:01.789 00:25:01.789 --- 10.0.0.2 ping statistics --- 00:25:01.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.789 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:01.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:25:01.789 00:25:01.789 --- 10.0.0.1 ping statistics --- 00:25:01.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.789 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1473472 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1473472 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1473472 ']' 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.789 14:10:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:01.789 [2024-07-15 14:10:59.679798] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:01.789 [2024-07-15 14:10:59.679866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.789 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.789 [2024-07-15 14:10:59.760511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.789 [2024-07-15 14:10:59.836209] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.789 [2024-07-15 14:10:59.836249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.789 [2024-07-15 14:10:59.836256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.789 [2024-07-15 14:10:59.836262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.789 [2024-07-15 14:10:59.836268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.789 [2024-07-15 14:10:59.836413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.789 [2024-07-15 14:10:59.836552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.789 [2024-07-15 14:10:59.836709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.789 [2024-07-15 14:10:59.836710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.361 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.361 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:25:02.361 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:02.361 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.361 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.361 [2024-07-15 14:11:00.466254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.622 Malloc0 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.622 [2024-07-15 14:11:00.565451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.622 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.622 [ 00:25:02.622 { 00:25:02.622 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:02.622 "subtype": "Discovery", 00:25:02.622 "listen_addresses": [ 00:25:02.622 { 00:25:02.622 "trtype": "TCP", 00:25:02.622 "adrfam": "IPv4", 00:25:02.622 "traddr": "10.0.0.2", 00:25:02.622 "trsvcid": "4420" 00:25:02.622 } 00:25:02.622 ], 00:25:02.622 "allow_any_host": true, 00:25:02.622 "hosts": [] 00:25:02.622 }, 00:25:02.622 { 00:25:02.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:02.622 "subtype": "NVMe", 00:25:02.622 "listen_addresses": [ 00:25:02.622 { 00:25:02.622 "trtype": "TCP", 00:25:02.622 "adrfam": "IPv4", 00:25:02.622 "traddr": "10.0.0.2", 00:25:02.622 "trsvcid": "4420" 00:25:02.622 } 00:25:02.622 ], 00:25:02.622 "allow_any_host": true, 00:25:02.622 "hosts": [], 00:25:02.622 "serial_number": "SPDK00000000000001", 00:25:02.622 "model_number": "SPDK bdev Controller", 00:25:02.622 "max_namespaces": 32, 00:25:02.622 "min_cntlid": 1, 00:25:02.622 "max_cntlid": 65519, 00:25:02.622 "namespaces": [ 00:25:02.622 { 00:25:02.622 "nsid": 1, 00:25:02.622 "bdev_name": "Malloc0", 00:25:02.622 "name": "Malloc0", 00:25:02.622 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:02.622 "eui64": "ABCDEF0123456789", 00:25:02.622 "uuid": "8878b2c7-978e-4eae-9846-e38cb283f8fc" 00:25:02.622 } 00:25:02.622 ] 00:25:02.622 } 00:25:02.622 ] 00:25:02.623 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.623 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:02.623 [2024-07-15 14:11:00.627666] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:02.623 [2024-07-15 14:11:00.627706] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473529 ] 00:25:02.623 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.623 [2024-07-15 14:11:00.660431] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:02.623 [2024-07-15 14:11:00.660485] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:02.623 [2024-07-15 14:11:00.660490] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:02.623 [2024-07-15 14:11:00.660502] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:02.623 [2024-07-15 14:11:00.660507] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:02.623 [2024-07-15 14:11:00.663783] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:02.623 [2024-07-15 14:11:00.663813] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1868ec0 0 00:25:02.623 [2024-07-15 14:11:00.671761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:02.623 [2024-07-15 14:11:00.671774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:02.623 [2024-07-15 14:11:00.671779] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:02.623 [2024-07-15 14:11:00.671782] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:02.623 [2024-07-15 14:11:00.671818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.671824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.671828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.623 [2024-07-15 14:11:00.671841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:02.623 [2024-07-15 14:11:00.671857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.623 [2024-07-15 14:11:00.679762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.623 [2024-07-15 14:11:00.679772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.623 [2024-07-15 14:11:00.679776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.679780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.623 [2024-07-15 14:11:00.679793] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:02.623 [2024-07-15 14:11:00.679801] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:02.623 [2024-07-15 14:11:00.679806] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:02.623 [2024-07-15 14:11:00.679820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.679824] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.679827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.623 [2024-07-15 14:11:00.679835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.623 [2024-07-15 14:11:00.679847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.623 [2024-07-15 14:11:00.680047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.623 [2024-07-15 14:11:00.680054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.623 [2024-07-15 14:11:00.680057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.623 [2024-07-15 14:11:00.680067] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:02.623 [2024-07-15 14:11:00.680075] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:02.623 [2024-07-15 14:11:00.680081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.623 [2024-07-15 14:11:00.680095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.623 [2024-07-15 14:11:00.680105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.623 [2024-07-15 14:11:00.680306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.623 [2024-07-15 14:11:00.680312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.623 [2024-07-15 14:11:00.680316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.623 [2024-07-15 14:11:00.680327] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:02.623 [2024-07-15 14:11:00.680335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:02.623 [2024-07-15 14:11:00.680341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.623 [2024-07-15 14:11:00.680355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.623 [2024-07-15 14:11:00.680365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.623 [2024-07-15 14:11:00.680542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.623 
[2024-07-15 14:11:00.680548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.623 [2024-07-15 14:11:00.680552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.623 [2024-07-15 14:11:00.680561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:02.623 [2024-07-15 14:11:00.680570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.623 [2024-07-15 14:11:00.680584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.623 [2024-07-15 14:11:00.680593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.623 [2024-07-15 14:11:00.680775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.623 [2024-07-15 14:11:00.680782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.623 [2024-07-15 14:11:00.680785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.623 [2024-07-15 14:11:00.680794] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:02.623 [2024-07-15 14:11:00.680799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:02.623 [2024-07-15 14:11:00.680806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:02.623 [2024-07-15 14:11:00.680911] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:02.623 [2024-07-15 14:11:00.680916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:02.623 [2024-07-15 14:11:00.680925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.623 [2024-07-15 14:11:00.680932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.623 [2024-07-15 14:11:00.680939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.623 [2024-07-15 14:11:00.680949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.623 [2024-07-15 14:11:00.681118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.624 [2024-07-15 14:11:00.681124] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.624 [2024-07-15 14:11:00.681130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.624 [2024-07-15 14:11:00.681138] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:02.624 [2024-07-15 14:11:00.681147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.681161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.624 [2024-07-15 14:11:00.681171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.624 [2024-07-15 14:11:00.681369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.624 [2024-07-15 14:11:00.681375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.624 [2024-07-15 14:11:00.681379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.624 [2024-07-15 14:11:00.681387] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:02.624 [2024-07-15 14:11:00.681391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:02.624 [2024-07-15 14:11:00.681399] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:02.624 [2024-07-15 14:11:00.681411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:02.624 [2024-07-15 14:11:00.681420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.681430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.624 [2024-07-15 14:11:00.681440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.624 [2024-07-15 14:11:00.681636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.624 [2024-07-15 14:11:00.681642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.624 [2024-07-15 14:11:00.681646] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681650] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ec0): datao=0, datal=4096, cccid=0 00:25:02.624 [2024-07-15 14:11:00.681655] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ebe40) on tqpair(0x1868ec0): expected_datao=0, payload_size=4096 00:25:02.624 [2024-07-15 14:11:00.681660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681709] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.681714] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.725761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.624 [2024-07-15 14:11:00.725771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.624 [2024-07-15 14:11:00.725774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.725778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.624 [2024-07-15 14:11:00.725786] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:02.624 [2024-07-15 14:11:00.725796] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:02.624 [2024-07-15 14:11:00.725801] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:02.624 [2024-07-15 14:11:00.725806] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:02.624 [2024-07-15 14:11:00.725811] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:02.624 [2024-07-15 14:11:00.725815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:02.624 [2024-07-15 14:11:00.725824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:02.624 [2024-07-15 14:11:00.725831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.725835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.725838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.725846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:02.624 [2024-07-15 14:11:00.725859] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.624 [2024-07-15 14:11:00.726037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.624 [2024-07-15 14:11:00.726044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.624 [2024-07-15 14:11:00.726048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0 00:25:02.624 [2024-07-15 14:11:00.726060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.726074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.624 [2024-07-15 14:11:00.726080] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.726093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.624 [2024-07-15 14:11:00.726099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.726112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.624 [2024-07-15 14:11:00.726118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.726131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.624 [2024-07-15 14:11:00.726136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:02.624 [2024-07-15 14:11:00.726147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:02.624 [2024-07-15 14:11:00.726155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ec0) 00:25:02.624 [2024-07-15 14:11:00.726166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.624 [2024-07-15 14:11:00.726177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebe40, cid 0, qid 0 00:25:02.624 [2024-07-15 14:11:00.726182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ebfc0, cid 1, qid 0 00:25:02.624 [2024-07-15 14:11:00.726187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec140, cid 2, qid 0 00:25:02.624 [2024-07-15 14:11:00.726192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.624 [2024-07-15 14:11:00.726197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec440, cid 4, qid 0 00:25:02.624 [2024-07-15 14:11:00.726433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.624 [2024-07-15 14:11:00.726440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.624 [2024-07-15 14:11:00.726444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec440) on tqpair=0x1868ec0 00:25:02.624 [2024-07-15 14:11:00.726453] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:02.624 [2024-07-15 14:11:00.726458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:02.624 [2024-07-15 14:11:00.726469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.624 [2024-07-15 14:11:00.726473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ec0) 00:25:02.625 [2024-07-15 14:11:00.726479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.625 [2024-07-15 14:11:00.726489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec440, cid 4, qid 0 00:25:02.625 [2024-07-15 14:11:00.726716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.625 [2024-07-15 14:11:00.726723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.625 [2024-07-15 14:11:00.726727] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.726730] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ec0): datao=0, datal=4096, cccid=4 00:25:02.625 [2024-07-15 14:11:00.726735] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ec440) on tqpair(0x1868ec0): expected_datao=0, payload_size=4096 00:25:02.625 [2024-07-15 14:11:00.726739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.726745] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.726749] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.726925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.625 [2024-07-15 14:11:00.726932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.625 [2024-07-15 14:11:00.726936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.726939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec440) on tqpair=0x1868ec0 00:25:02.625 [2024-07-15 14:11:00.726951] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:02.625 [2024-07-15 14:11:00.726974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.726979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ec0) 00:25:02.625 [2024-07-15 14:11:00.726985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.625 [2024-07-15 14:11:00.726994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.726998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.727001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1868ec0) 00:25:02.625 [2024-07-15 14:11:00.727007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.625 [2024-07-15 14:11:00.727021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x18ec440, cid 4, qid 0 00:25:02.625 [2024-07-15 14:11:00.727026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec5c0, cid 5, qid 0 00:25:02.625 [2024-07-15 14:11:00.727298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.625 [2024-07-15 14:11:00.727304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.625 [2024-07-15 14:11:00.727308] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.727311] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ec0): datao=0, datal=1024, cccid=4 00:25:02.625 [2024-07-15 14:11:00.727315] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ec440) on tqpair(0x1868ec0): expected_datao=0, payload_size=1024 00:25:02.625 [2024-07-15 14:11:00.727320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.727327] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.727330] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.727336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.625 [2024-07-15 14:11:00.727341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.625 [2024-07-15 14:11:00.727345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.625 [2024-07-15 14:11:00.727348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec5c0) on tqpair=0x1868ec0 00:25:02.890 [2024-07-15 14:11:00.767902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.890 [2024-07-15 14:11:00.767912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.890 [2024-07-15 14:11:00.767916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.890 [2024-07-15 14:11:00.767920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec440) on tqpair=0x1868ec0 00:25:02.890 [2024-07-15 14:11:00.767936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.890 [2024-07-15 14:11:00.767940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ec0) 00:25:02.890 [2024-07-15 14:11:00.767947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.890 [2024-07-15 14:11:00.767962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec440, cid 4, qid 0 00:25:02.890 [2024-07-15 14:11:00.768234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.890 [2024-07-15 14:11:00.768241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.890 [2024-07-15 14:11:00.768245] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.890 [2024-07-15 14:11:00.768249] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ec0): datao=0, datal=3072, cccid=4 00:25:02.890 [2024-07-15 14:11:00.768253] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ec440) on tqpair(0x1868ec0): expected_datao=0, payload_size=3072 00:25:02.890 [2024-07-15 14:11:00.768258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.890 [2024-07-15 14:11:00.768277] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.890 [2024-07-15 14:11:00.768281] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.810760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:02.890 [2024-07-15 14:11:00.810768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:02.890 [2024-07-15 14:11:00.810775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.810779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec440) on tqpair=0x1868ec0
00:25:02.890 [2024-07-15 14:11:00.810788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.810792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ec0)
00:25:02.890 [2024-07-15 14:11:00.810798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.890 [2024-07-15 14:11:00.810812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec440, cid 4, qid 0
00:25:02.890 [2024-07-15 14:11:00.810972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:02.890 [2024-07-15 14:11:00.810978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:02.890 [2024-07-15 14:11:00.810982] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.810985] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ec0): datao=0, datal=8, cccid=4
00:25:02.890 [2024-07-15 14:11:00.810990] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18ec440) on tqpair(0x1868ec0): expected_datao=0, payload_size=8
00:25:02.890 [2024-07-15 14:11:00.810994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.811000] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.811004] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.852763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:02.890 [2024-07-15 14:11:00.852772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:02.890 [2024-07-15 14:11:00.852776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:02.890 [2024-07-15 14:11:00.852779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec440) on tqpair=0x1868ec0
00:25:02.890 =====================================================
00:25:02.890 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:02.890 =====================================================
00:25:02.890 Controller Capabilities/Features
00:25:02.890 ================================
00:25:02.890 Vendor ID: 0000
00:25:02.890 Subsystem Vendor ID: 0000
00:25:02.890 Serial Number: ....................
00:25:02.890 Model Number: ........................................
00:25:02.890 Firmware Version: 24.09
00:25:02.890 Recommended Arb Burst: 0
00:25:02.890 IEEE OUI Identifier: 00 00 00
00:25:02.890 Multi-path I/O
00:25:02.890 May have multiple subsystem ports: No
00:25:02.890 May have multiple controllers: No
00:25:02.890 Associated with SR-IOV VF: No
00:25:02.890 Max Data Transfer Size: 131072
00:25:02.890 Max Number of Namespaces: 0
00:25:02.890 Max Number of I/O Queues: 1024
00:25:02.890 NVMe Specification Version (VS): 1.3
00:25:02.890 NVMe Specification Version (Identify): 1.3
00:25:02.890 Maximum Queue Entries: 128
00:25:02.890 Contiguous Queues Required: Yes
00:25:02.890 Arbitration Mechanisms Supported
00:25:02.890 Weighted Round Robin: Not Supported
00:25:02.890 Vendor Specific: Not Supported
00:25:02.890 Reset Timeout: 15000 ms
00:25:02.890 Doorbell Stride: 4 bytes
00:25:02.890 NVM Subsystem Reset: Not Supported
00:25:02.890 Command Sets Supported
00:25:02.890 NVM Command Set: Supported
00:25:02.890 Boot Partition: Not Supported
00:25:02.890 Memory Page Size Minimum: 4096 bytes
00:25:02.890 Memory Page Size Maximum: 4096 bytes
00:25:02.890 Persistent Memory Region: Not Supported
00:25:02.890 Optional Asynchronous Events Supported
00:25:02.890 Namespace Attribute Notices: Not Supported
00:25:02.890 Firmware Activation Notices: Not Supported
00:25:02.890 ANA Change Notices: Not Supported
00:25:02.890 PLE Aggregate Log Change Notices: Not Supported
00:25:02.890 LBA Status Info Alert Notices: Not Supported
00:25:02.890 EGE Aggregate Log Change Notices: Not Supported
00:25:02.890 Normal NVM Subsystem Shutdown event: Not Supported
00:25:02.890 Zone Descriptor Change Notices: Not Supported
00:25:02.890 Discovery Log Change Notices: Supported
00:25:02.890 Controller Attributes
00:25:02.890 128-bit Host Identifier: Not Supported
00:25:02.890 Non-Operational Permissive Mode: Not Supported
00:25:02.890 NVM Sets: Not Supported
00:25:02.890 Read Recovery Levels: Not Supported
00:25:02.891 Endurance Groups: Not Supported
00:25:02.891 Predictable Latency Mode: Not Supported
00:25:02.891 Traffic Based Keep Alive: Not Supported
00:25:02.891 Namespace Granularity: Not Supported
00:25:02.891 SQ Associations: Not Supported
00:25:02.891 UUID List: Not Supported
00:25:02.891 Multi-Domain Subsystem: Not Supported
00:25:02.891 Fixed Capacity Management: Not Supported
00:25:02.891 Variable Capacity Management: Not Supported
00:25:02.891 Delete Endurance Group: Not Supported
00:25:02.891 Delete NVM Set: Not Supported
00:25:02.891 Extended LBA Formats Supported: Not Supported
00:25:02.891 Flexible Data Placement Supported: Not Supported
00:25:02.891
00:25:02.891 Controller Memory Buffer Support
00:25:02.891 ================================
00:25:02.891 Supported: No
00:25:02.891
00:25:02.891 Persistent Memory Region Support
00:25:02.891 ================================
00:25:02.891 Supported: No
00:25:02.891
00:25:02.891 Admin Command Set Attributes
00:25:02.891 ============================
00:25:02.891 Security Send/Receive: Not Supported
00:25:02.891 Format NVM: Not Supported
00:25:02.891 Firmware Activate/Download: Not Supported
00:25:02.891 Namespace Management: Not Supported
00:25:02.891 Device Self-Test: Not Supported
00:25:02.891 Directives: Not Supported
00:25:02.891 NVMe-MI: Not Supported
00:25:02.891 Virtualization Management: Not Supported
00:25:02.891 Doorbell Buffer Config: Not Supported
00:25:02.891 Get LBA Status Capability: Not Supported
00:25:02.891 Command & Feature Lockdown Capability: Not Supported
00:25:02.891 Abort Command Limit: 1
00:25:02.891 Async Event Request Limit: 4
00:25:02.891 Number of Firmware Slots: N/A
00:25:02.891 Firmware Slot 1 Read-Only: N/A
00:25:02.891 Firmware Activation Without Reset: N/A
00:25:02.891 Multiple Update Detection Support: N/A
00:25:02.891 Firmware Update Granularity: No Information Provided
00:25:02.891 Per-Namespace SMART Log: No
00:25:02.891 Asymmetric Namespace Access Log Page: Not Supported
00:25:02.891 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:02.891 Command Effects Log Page: Not Supported
00:25:02.891 Get Log Page Extended Data: Supported
00:25:02.891 Telemetry Log Pages: Not Supported
00:25:02.891 Persistent Event Log Pages: Not Supported
00:25:02.891 Supported Log Pages Log Page: May Support
00:25:02.891 Commands Supported & Effects Log Page: Not Supported
00:25:02.891 Feature Identifiers & Effects Log Page: May Support
00:25:02.891 NVMe-MI Commands & Effects Log Page: May Support
00:25:02.891 Data Area 4 for Telemetry Log: Not Supported
00:25:02.891 Error Log Page Entries Supported: 128
00:25:02.891 Keep Alive: Not Supported
00:25:02.891
00:25:02.891 NVM Command Set Attributes
00:25:02.891 ==========================
00:25:02.891 Submission Queue Entry Size
00:25:02.891 Max: 1
00:25:02.891 Min: 1
00:25:02.891 Completion Queue Entry Size
00:25:02.891 Max: 1
00:25:02.891 Min: 1
00:25:02.891 Number of Namespaces: 0
00:25:02.891 Compare Command: Not Supported
00:25:02.891 Write Uncorrectable Command: Not Supported
00:25:02.891 Dataset Management Command: Not Supported
00:25:02.891 Write Zeroes Command: Not Supported
00:25:02.891 Set Features Save Field: Not Supported
00:25:02.891 Reservations: Not Supported
00:25:02.891 Timestamp: Not Supported
00:25:02.891 Copy: Not Supported
00:25:02.891 Volatile Write Cache: Not Present
00:25:02.891 Atomic Write Unit (Normal): 1
00:25:02.891 Atomic Write Unit (PFail): 1
00:25:02.891 Atomic Compare & Write Unit: 1
00:25:02.891 Fused Compare & Write: Supported
00:25:02.891 Scatter-Gather List
00:25:02.891 SGL Command Set: Supported
00:25:02.891 SGL Keyed: Supported
00:25:02.891 SGL Bit Bucket Descriptor: Not Supported
00:25:02.891 SGL Metadata Pointer: Not Supported
00:25:02.891 Oversized SGL: Not Supported
00:25:02.891 SGL Metadata Address: Not Supported
00:25:02.891 SGL Offset: Supported
00:25:02.891 Transport SGL Data Block: Not Supported
00:25:02.891 Replay Protected Memory Block: Not Supported
00:25:02.891
00:25:02.891 Firmware Slot Information
00:25:02.891 =========================
00:25:02.891 Active slot: 0
00:25:02.891
00:25:02.891
00:25:02.891 Error Log
00:25:02.891 =========
00:25:02.891
00:25:02.891 Active Namespaces
00:25:02.891 =================
00:25:02.891 Discovery Log Page
00:25:02.891 ==================
00:25:02.891 Generation Counter: 2
00:25:02.891 Number of Records: 2
00:25:02.891 Record Format: 0
00:25:02.891
00:25:02.891 Discovery Log Entry 0
00:25:02.891 ----------------------
00:25:02.891 Transport Type: 3 (TCP)
00:25:02.891 Address Family: 1 (IPv4)
00:25:02.891 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:02.891 Entry Flags:
00:25:02.891 Duplicate Returned Information: 1
00:25:02.891 Explicit Persistent Connection Support for Discovery: 1
00:25:02.891 Transport Requirements:
00:25:02.891 Secure Channel: Not Required
00:25:02.891 Port ID: 0 (0x0000)
00:25:02.891 Controller ID: 65535 (0xffff)
00:25:02.891 Admin Max SQ Size: 128
00:25:02.891 Transport Service Identifier: 4420
00:25:02.891 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:02.891 Transport Address: 10.0.0.2
00:25:02.891 Discovery Log Entry 1
00:25:02.891 ----------------------
00:25:02.891 Transport Type: 3 (TCP)
00:25:02.891 Address Family: 1 (IPv4)
00:25:02.891 Subsystem Type: 2 (NVM Subsystem)
00:25:02.891 Entry Flags:
00:25:02.891 Duplicate Returned Information: 0
00:25:02.891 Explicit Persistent Connection Support for Discovery: 0
00:25:02.891 Transport Requirements:
00:25:02.891 Secure Channel: Not Required
00:25:02.891 Port ID: 0 (0x0000)
00:25:02.891 Controller ID: 65535 (0xffff)
00:25:02.891 Admin Max SQ Size: 128
00:25:02.891 Transport Service Identifier: 4420
00:25:02.891 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:02.891 Transport Address: 10.0.0.2 [2024-07-15 14:11:00.852872] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:25:02.891 [2024-07-15 14:11:00.852883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebe40) on tqpair=0x1868ec0
00:25:02.891 [2024-07-15 14:11:00.852890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.891 [2024-07-15 14:11:00.852896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ebfc0) on tqpair=0x1868ec0
00:25:02.891 [2024-07-15 14:11:00.852900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.891 [2024-07-15 14:11:00.852905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec140) on tqpair=0x1868ec0
00:25:02.891 [2024-07-15 14:11:00.852909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.891 [2024-07-15 14:11:00.852914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0
00:25:02.891 [2024-07-15 14:11:00.852919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.891 [2024-07-15 14:11:00.852929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.852933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.852937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0)
00:25:02.891 [2024-07-15 14:11:00.852944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.891 [2024-07-15 14:11:00.852957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0
00:25:02.891 [2024-07-15 14:11:00.853232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:02.891 [2024-07-15 14:11:00.853239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:02.891 [2024-07-15 14:11:00.853244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.853248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0
00:25:02.891 [2024-07-15 14:11:00.853255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.853259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.853262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0)
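The discovery log page above is exactly what a host acts on: entry 0 describes the discovery service itself, while entry 1 points at the NVM subsystem nqn.2016-06.io.spdk:cnode1 behind 10.0.0.2:4420 over TCP. As a rough illustration (not part of the test run), a host-side SPDK program could connect to that entry along the following lines; spdk_nvme_connect() and spdk_nvme_trid_populate_transport() are real SPDK APIs, but the app name, error handling, and printout here are invented for the sketch, and a configured SPDK build with hugepages is assumed.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        opts.name = "disc_connect";  /* illustrative app name, not from the test */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Fields taken from Discovery Log Entry 1 above. */
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);  /* synchronous attach */
        if (ctrlr == NULL) {
            return 1;
        }
        printf("model: %.40s\n", spdk_nvme_ctrlr_get_data(ctrlr)->mn);
        spdk_nvme_detach(ctrlr);
        return 0;
    }

The ABORTED - SQ DELETION completions in the surrounding trace are the expected tail of the run: the identify tool detaches from the discovery controller, and its remaining admin requests are failed back while the admin queue is torn down.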
00:25:02.891 [2024-07-15 14:11:00.853269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.891 [2024-07-15 14:11:00.853282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0
00:25:02.891 [2024-07-15 14:11:00.853531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:02.891 [2024-07-15 14:11:00.853538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:02.891 [2024-07-15 14:11:00.853541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.853545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0
00:25:02.891 [2024-07-15 14:11:00.853550] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:25:02.891 [2024-07-15 14:11:00.853554] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:25:02.891 [2024-07-15 14:11:00.853563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.853567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:02.891 [2024-07-15 14:11:00.853570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0)
00:25:02.891 [2024-07-15 14:11:00.853577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.891 [2024-07-15 14:11:00.853587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0
00:25:02.892 [2024-07-15 14:11:00.853781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:02.892 [2024-07-15 14:11:00.853788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:02.892 [2024-07-15 14:11:00.853792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:02.892 [2024-07-15 14:11:00.853795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0
00:25:02.892 [2024-07-15 14:11:00.853805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:02.892 [2024-07-15 14:11:00.853809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:02.892 [2024-07-15 14:11:00.853812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0)
00:25:02.892 [2024-07-15 14:11:00.853819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.892 [2024-07-15 14:11:00.853829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0
00:25:02.892 [2024-07-15 14:11:00.854008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:02.892 [2024-07-15 14:11:00.854014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:02.892 [2024-07-15 14:11:00.854017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:02.892 [2024-07-15 14:11:00.854021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0
00:25:02.892 [2024-07-15 14:11:00.854030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:02.892 [2024-07-15 14:11:00.854034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:02.892 [2024-07-15 14:11:00.854038]
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.854044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.854054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.854287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.854293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.854297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.854310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.854323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.854333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.854539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.854546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.854550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.854563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.854576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.854586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.854791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.854798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.854801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.854814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.854821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.854828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.854838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.855044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.855051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.855054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.855067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.855081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.855091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.855295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.855303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.855307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.855320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855327] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.855334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.855344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.855545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.855551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.855554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.855567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.855581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.855591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 
[2024-07-15 14:11:00.855798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.855804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.855808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.855821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.855828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.855835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.855844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.856023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.856029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.856033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.856046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.856060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.856069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.856303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.856311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.856314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.856331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.856345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.856355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.856604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.856611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:02.892 [2024-07-15 14:11:00.856614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.856627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.856634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ec0) 00:25:02.892 [2024-07-15 14:11:00.856641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.892 [2024-07-15 14:11:00.856650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18ec2c0, cid 3, qid 0 00:25:02.892 [2024-07-15 14:11:00.860760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.892 [2024-07-15 14:11:00.860769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.892 [2024-07-15 14:11:00.860772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.892 [2024-07-15 14:11:00.860776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18ec2c0) on tqpair=0x1868ec0 00:25:02.892 [2024-07-15 14:11:00.860783] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:25:02.893 00:25:02.893 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:02.893 [2024-07-15 14:11:00.899207] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
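For reference, the quoted -r argument in the spdk_nvme_identify invocation just above is an SPDK transport ID string. The tool hands it to spdk_nvme_transport_id_parse(), which fills the same struct spdk_nvme_transport_id that the connect path consumes. A minimal, self-contained sketch of just that parsing step (the asserts are illustrative, not from the test):

    #include <assert.h>
    #include <string.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_nvme_transport_id trid = {0};
        /* The same key:value string the harness passes via -r. */
        const char *str = "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                          "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1";

        assert(spdk_nvme_transport_id_parse(&trid, str) == 0);
        assert(trid.trtype == SPDK_NVME_TRANSPORT_TCP);
        assert(strcmp(trid.traddr, "10.0.0.2") == 0);
        return 0;
    }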
00:25:02.893 [2024-07-15 14:11:00.899252] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473660 ] 00:25:02.893 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.893 [2024-07-15 14:11:00.931315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:02.893 [2024-07-15 14:11:00.931361] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:02.893 [2024-07-15 14:11:00.931366] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:02.893 [2024-07-15 14:11:00.931377] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:02.893 [2024-07-15 14:11:00.931383] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:02.893 [2024-07-15 14:11:00.934778] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:02.893 [2024-07-15 14:11:00.934807] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19dfec0 0 00:25:02.893 [2024-07-15 14:11:00.942761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:02.893 [2024-07-15 14:11:00.942771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:02.893 [2024-07-15 14:11:00.942775] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:02.893 [2024-07-15 14:11:00.942778] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:02.893 [2024-07-15 14:11:00.942810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.942815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.942819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.942831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:02.893 [2024-07-15 14:11:00.942846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.950763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.893 [2024-07-15 14:11:00.950772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.893 [2024-07-15 14:11:00.950776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.950780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.893 [2024-07-15 14:11:00.950791] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:02.893 [2024-07-15 14:11:00.950798] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:02.893 [2024-07-15 14:11:00.950803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:02.893 [2024-07-15 14:11:00.950815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.950819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
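The "setting state to ..." lines around this point trace the standard NVMe controller-enable handshake, carried over the fabric as the FABRIC PROPERTY GET/SET capsules in this log: read VS and CAP, check CC.EN, get CSTS.RDY to 0 if needed, write CC.EN = 1, then poll until CSTS.RDY = 1 (the "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" message). A compressed, runnable sketch against a simulated property space follows; the register offsets and bit positions are the NVMe spec values, while prop_get32/prop_set32 are stand-ins for the Property Get/Set exchange, not SPDK API:

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CAP  0x00  /* Controller Capabilities (64-bit) */
    #define NVME_REG_VS   0x08  /* Specification Version */
    #define NVME_REG_CC   0x14  /* Controller Configuration */
    #define NVME_REG_CSTS 0x1c  /* Controller Status */

    static uint32_t regs[8];  /* simulated 32-bit property space, indexed by byte offset / 4 */

    static uint32_t prop_get32(uint32_t ofs) { return regs[ofs / 4]; }

    static void prop_set32(uint32_t ofs, uint32_t val)
    {
        regs[ofs / 4] = val;
        /* Simulate a well-behaved controller: CSTS.RDY follows CC.EN. */
        if (ofs == NVME_REG_CC) {
            regs[NVME_REG_CSTS / 4] = val & 1;
        }
    }

    int main(void)
    {
        regs[NVME_REG_VS / 4] = 0x00010300;        /* VS 1.3, as the target reports */

        (void)prop_get32(NVME_REG_VS);             /* "read vs" */
        (void)prop_get32(NVME_REG_CAP);            /* "read cap" (low dword) */

        uint32_t cc = prop_get32(NVME_REG_CC);     /* "check en" */
        if (cc & 1) {                              /* already enabled: disable first */
            prop_set32(NVME_REG_CC, cc & ~1u);
            while (prop_get32(NVME_REG_CSTS) & 1) {
                /* "wait for CSTS.RDY = 0" */
            }
        }
        prop_set32(NVME_REG_CC, cc | 1);           /* "enable controller by writing CC.EN = 1" */
        while (!(prop_get32(NVME_REG_CSTS) & 1)) {
            /* "wait for CSTS.RDY = 1" */
        }
        printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
        return 0;
    }

The 15000 ms timeouts attached to those states come from the controller's CAP.TO field; the discovery controller earlier in this log reported the same value as "Reset Timeout: 15000 ms".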
00:25:02.893 [2024-07-15 14:11:00.950823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.950830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.893 [2024-07-15 14:11:00.950843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.951045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.893 [2024-07-15 14:11:00.951052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.893 [2024-07-15 14:11:00.951055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.893 [2024-07-15 14:11:00.951064] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:02.893 [2024-07-15 14:11:00.951071] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:02.893 [2024-07-15 14:11:00.951078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.951092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.893 [2024-07-15 14:11:00.951102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.951302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.893 [2024-07-15 14:11:00.951310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.893 [2024-07-15 14:11:00.951313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.893 [2024-07-15 14:11:00.951325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:02.893 [2024-07-15 14:11:00.951333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:02.893 [2024-07-15 14:11:00.951340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.951353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.893 [2024-07-15 14:11:00.951363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.951564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.893 [2024-07-15 14:11:00.951571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:25:02.893 [2024-07-15 14:11:00.951574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.893 [2024-07-15 14:11:00.951583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:02.893 [2024-07-15 14:11:00.951592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.951606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.893 [2024-07-15 14:11:00.951616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.951840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.893 [2024-07-15 14:11:00.951846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.893 [2024-07-15 14:11:00.951850] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.893 [2024-07-15 14:11:00.951858] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:02.893 [2024-07-15 14:11:00.951863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:02.893 [2024-07-15 14:11:00.951870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:02.893 [2024-07-15 14:11:00.951975] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:02.893 [2024-07-15 14:11:00.951979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:02.893 [2024-07-15 14:11:00.951986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.951994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.952000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.893 [2024-07-15 14:11:00.952011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.952203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.893 [2024-07-15 14:11:00.952209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.893 [2024-07-15 14:11:00.952215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.952219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on 
tqpair=0x19dfec0 00:25:02.893 [2024-07-15 14:11:00.952224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:02.893 [2024-07-15 14:11:00.952233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.952237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.952240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.952247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.893 [2024-07-15 14:11:00.952256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.952437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.893 [2024-07-15 14:11:00.952443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.893 [2024-07-15 14:11:00.952447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.952450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.893 [2024-07-15 14:11:00.952455] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:02.893 [2024-07-15 14:11:00.952459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:02.893 [2024-07-15 14:11:00.952467] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:02.893 [2024-07-15 14:11:00.952478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:02.893 [2024-07-15 14:11:00.952487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.952490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.893 [2024-07-15 14:11:00.952497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.893 [2024-07-15 14:11:00.952507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.893 [2024-07-15 14:11:00.952689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.893 [2024-07-15 14:11:00.952696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.893 [2024-07-15 14:11:00.952700] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.893 [2024-07-15 14:11:00.952703] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=4096, cccid=0 00:25:02.894 [2024-07-15 14:11:00.952708] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a62e40) on tqpair(0x19dfec0): expected_datao=0, payload_size=4096 00:25:02.894 [2024-07-15 14:11:00.952712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.952719] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.952723] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.952887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.894 [2024-07-15 14:11:00.952894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.894 [2024-07-15 14:11:00.952897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.952901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.894 [2024-07-15 14:11:00.952908] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:02.894 [2024-07-15 14:11:00.952915] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:02.894 [2024-07-15 14:11:00.952921] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:02.894 [2024-07-15 14:11:00.952925] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:02.894 [2024-07-15 14:11:00.952930] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:02.894 [2024-07-15 14:11:00.952934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.952942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.952949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.952952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.952956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.952963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:02.894 [2024-07-15 14:11:00.952974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.894 [2024-07-15 14:11:00.953159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.894 [2024-07-15 14:11:00.953166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.894 [2024-07-15 14:11:00.953169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.894 [2024-07-15 14:11:00.953179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.953193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.894 [2024-07-15 14:11:00.953199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953206] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.953212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.894 [2024-07-15 14:11:00.953218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.953230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.894 [2024-07-15 14:11:00.953237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.953250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.894 [2024-07-15 14:11:00.953254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.953264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.953273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.953284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.894 [2024-07-15 14:11:00.953295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62e40, cid 0, qid 0 00:25:02.894 [2024-07-15 14:11:00.953300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a62fc0, cid 1, qid 0 00:25:02.894 [2024-07-15 14:11:00.953305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63140, cid 2, qid 0 00:25:02.894 [2024-07-15 14:11:00.953310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a632c0, cid 3, qid 0 00:25:02.894 [2024-07-15 14:11:00.953314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63440, cid 4, qid 0 00:25:02.894 [2024-07-15 14:11:00.953497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.894 [2024-07-15 14:11:00.953504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.894 [2024-07-15 14:11:00.953507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63440) on tqpair=0x19dfec0 00:25:02.894 [2024-07-15 14:11:00.953516] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:02.894 [2024-07-15 14:11:00.953520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.953528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.953534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.953541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.953554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:02.894 [2024-07-15 14:11:00.953564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63440, cid 4, qid 0 00:25:02.894 [2024-07-15 14:11:00.953763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.894 [2024-07-15 14:11:00.953770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.894 [2024-07-15 14:11:00.953774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63440) on tqpair=0x19dfec0 00:25:02.894 [2024-07-15 14:11:00.953839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.953848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:02.894 [2024-07-15 14:11:00.953855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.953859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19dfec0) 00:25:02.894 [2024-07-15 14:11:00.953865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.894 [2024-07-15 14:11:00.953875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63440, cid 4, qid 0 00:25:02.894 [2024-07-15 14:11:00.954049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.894 [2024-07-15 14:11:00.954056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.894 [2024-07-15 14:11:00.954061] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.954065] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=4096, cccid=4 00:25:02.894 [2024-07-15 14:11:00.954069] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a63440) on tqpair(0x19dfec0): expected_datao=0, payload_size=4096 00:25:02.894 [2024-07-15 14:11:00.954074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.954080] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.954084] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.954262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:25:02.894 [2024-07-15 14:11:00.954268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.894 [2024-07-15 14:11:00.954272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.894 [2024-07-15 14:11:00.954275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63440) on tqpair=0x19dfec0 00:25:02.894 [2024-07-15 14:11:00.954284] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:02.894 [2024-07-15 14:11:00.954293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.954302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.954309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.954313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.954319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.954329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63440, cid 4, qid 0 00:25:02.895 [2024-07-15 14:11:00.954558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.895 [2024-07-15 14:11:00.954564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.895 [2024-07-15 14:11:00.954567] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.954571] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=4096, cccid=4 00:25:02.895 [2024-07-15 14:11:00.954575] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a63440) on tqpair(0x19dfec0): expected_datao=0, payload_size=4096 00:25:02.895 [2024-07-15 14:11:00.954579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.954586] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.954590] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.954744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.895 [2024-07-15 14:11:00.954750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.895 [2024-07-15 14:11:00.958760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.958764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63440) on tqpair=0x19dfec0 00:25:02.895 [2024-07-15 14:11:00.958777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.958786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.958794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.958797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.958804] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.958819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63440, cid 4, qid 0 00:25:02.895 [2024-07-15 14:11:00.958988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.895 [2024-07-15 14:11:00.958994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.895 [2024-07-15 14:11:00.958998] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959001] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=4096, cccid=4 00:25:02.895 [2024-07-15 14:11:00.959005] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a63440) on tqpair(0x19dfec0): expected_datao=0, payload_size=4096 00:25:02.895 [2024-07-15 14:11:00.959010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959030] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959034] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.895 [2024-07-15 14:11:00.959240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.895 [2024-07-15 14:11:00.959243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63440) on tqpair=0x19dfec0 00:25:02.895 [2024-07-15 14:11:00.959254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.959262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.959270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.959277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.959282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.959287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.959292] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:02.895 [2024-07-15 14:11:00.959296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:02.895 [2024-07-15 14:11:00.959301] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:02.895 [2024-07-15 14:11:00.959314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.959325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.959331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.959344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.895 [2024-07-15 14:11:00.959357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63440, cid 4, qid 0 00:25:02.895 [2024-07-15 14:11:00.959362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a635c0, cid 5, qid 0 00:25:02.895 [2024-07-15 14:11:00.959542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.895 [2024-07-15 14:11:00.959575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.895 [2024-07-15 14:11:00.959579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63440) on tqpair=0x19dfec0 00:25:02.895 [2024-07-15 14:11:00.959589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.895 [2024-07-15 14:11:00.959595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.895 [2024-07-15 14:11:00.959598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a635c0) on tqpair=0x19dfec0 00:25:02.895 [2024-07-15 14:11:00.959611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.959621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.959630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a635c0, cid 5, qid 0 00:25:02.895 [2024-07-15 14:11:00.959826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.895 [2024-07-15 14:11:00.959833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.895 [2024-07-15 14:11:00.959837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a635c0) on tqpair=0x19dfec0 00:25:02.895 [2024-07-15 14:11:00.959849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.959853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.959859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.959869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a635c0, cid 5, qid 0 00:25:02.895 [2024-07-15 14:11:00.960083] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.895 [2024-07-15 14:11:00.960089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.895 [2024-07-15 14:11:00.960093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a635c0) on tqpair=0x19dfec0 00:25:02.895 [2024-07-15 14:11:00.960105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.960115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.960125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a635c0, cid 5, qid 0 00:25:02.895 [2024-07-15 14:11:00.960357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.895 [2024-07-15 14:11:00.960363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.895 [2024-07-15 14:11:00.960367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a635c0) on tqpair=0x19dfec0 00:25:02.895 [2024-07-15 14:11:00.960384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.960394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.960401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.960413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.960420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.960430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.960437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19dfec0) 00:25:02.895 [2024-07-15 14:11:00.960447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.895 [2024-07-15 14:11:00.960458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a635c0, cid 5, qid 0 00:25:02.895 [2024-07-15 14:11:00.960463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63440, cid 4, qid 0 
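[Editor's note] The four GET LOG PAGE notices above are the "set supported log pages" step fetching the Error Information (01h), SMART / Health Information (02h), Firmware Slot Information (03h), and Commands Supported and Effects (05h) pages in one batch; the "pdu type" values in these records are the NVMe/TCP PDU opcodes (4 = CapsuleCmd, 5 = CapsuleResp, 7 = C2HData). Each command packs its transfer length into CDW10 next to the log page ID, which is why the C2HData records that follow carry payload_size values of 8192, 512, 512, and 4096 bytes. A minimal decode of those CDW10 values, as a standalone shell sketch (not part of the test harness):

```bash
#!/usr/bin/env bash
# GET LOG PAGE CDW10 layout per the NVMe base spec:
#   bits 07:00 = Log Page Identifier (LID)
#   bits 31:16 = NUMDL, 0's-based dword count (lower 16 bits)
for cdw10 in 0x07ff0001 0x007f0002 0x007f0003 0x03ff0005; do
    lid=$(( cdw10 & 0xff ))
    numdl=$(( (cdw10 >> 16) & 0xffff ))
    printf 'LID 0x%02x -> %d bytes\n' "$lid" $(( (numdl + 1) * 4 ))
done
# LID 0x01 -> 8192 bytes  (128 error log entries x 64 B, matching the controller data)
# LID 0x02 -> 512 bytes   (SMART / Health Information)
# LID 0x03 -> 512 bytes   (Firmware Slot Information)
# LID 0x05 -> 4096 bytes  (Commands Supported and Effects)
```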
00:25:02.895 [2024-07-15 14:11:00.960467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a63740, cid 6, qid 0 00:25:02.895 [2024-07-15 14:11:00.960472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a638c0, cid 7, qid 0 00:25:02.895 [2024-07-15 14:11:00.960729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.895 [2024-07-15 14:11:00.960735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.895 [2024-07-15 14:11:00.960738] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.895 [2024-07-15 14:11:00.960742] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=8192, cccid=5 00:25:02.895 [2024-07-15 14:11:00.960746] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a635c0) on tqpair(0x19dfec0): expected_datao=0, payload_size=8192 00:25:02.896 [2024-07-15 14:11:00.960757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960825] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960830] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.896 [2024-07-15 14:11:00.960841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.896 [2024-07-15 14:11:00.960845] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=512, cccid=4 00:25:02.896 [2024-07-15 14:11:00.960852] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a63440) on tqpair(0x19dfec0): expected_datao=0, payload_size=512 00:25:02.896 [2024-07-15 14:11:00.960857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960863] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960867] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.896 [2024-07-15 14:11:00.960878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.896 [2024-07-15 14:11:00.960881] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960885] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=512, cccid=6 00:25:02.896 [2024-07-15 14:11:00.960889] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a63740) on tqpair(0x19dfec0): expected_datao=0, payload_size=512 00:25:02.896 [2024-07-15 14:11:00.960893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960899] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960904] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.896 [2024-07-15 14:11:00.960916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.896 [2024-07-15 14:11:00.960919] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960923] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19dfec0): datao=0, datal=4096, cccid=7 00:25:02.896 [2024-07-15 14:11:00.960927] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a638c0) on tqpair(0x19dfec0): expected_datao=0, payload_size=4096 00:25:02.896 [2024-07-15 14:11:00.960931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960943] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.960946] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.961128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.896 [2024-07-15 14:11:00.961134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.896 [2024-07-15 14:11:00.961138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.961142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a635c0) on tqpair=0x19dfec0 00:25:02.896 [2024-07-15 14:11:00.961153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.896 [2024-07-15 14:11:00.961159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.896 [2024-07-15 14:11:00.961163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.961166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63440) on tqpair=0x19dfec0 00:25:02.896 [2024-07-15 14:11:00.961176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.896 [2024-07-15 14:11:00.961182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.896 [2024-07-15 14:11:00.961185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.961189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63740) on tqpair=0x19dfec0 00:25:02.896 [2024-07-15 14:11:00.961196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.896 [2024-07-15 14:11:00.961202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.896 [2024-07-15 14:11:00.961205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.896 [2024-07-15 14:11:00.961209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a638c0) on tqpair=0x19dfec0 00:25:02.896 ===================================================== 00:25:02.896 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:02.896 ===================================================== 00:25:02.896 Controller Capabilities/Features 00:25:02.896 ================================ 00:25:02.896 Vendor ID: 8086 00:25:02.896 Subsystem Vendor ID: 8086 00:25:02.896 Serial Number: SPDK00000000000001 00:25:02.896 Model Number: SPDK bdev Controller 00:25:02.896 Firmware Version: 24.09 00:25:02.896 Recommended Arb Burst: 6 00:25:02.896 IEEE OUI Identifier: e4 d2 5c 00:25:02.896 Multi-path I/O 00:25:02.896 May have multiple subsystem ports: Yes 00:25:02.896 May have multiple controllers: Yes 00:25:02.896 Associated with SR-IOV VF: No 00:25:02.896 Max Data Transfer Size: 131072 00:25:02.896 Max Number of Namespaces: 32 00:25:02.896 Max Number of I/O Queues: 127 00:25:02.896 NVMe Specification Version (VS): 1.3 00:25:02.896 NVMe Specification Version (Identify): 1.3 00:25:02.896 Maximum Queue Entries: 128 00:25:02.896 Contiguous Queues Required: Yes 00:25:02.896 
Arbitration Mechanisms Supported 00:25:02.896 Weighted Round Robin: Not Supported 00:25:02.896 Vendor Specific: Not Supported 00:25:02.896 Reset Timeout: 15000 ms 00:25:02.896 Doorbell Stride: 4 bytes 00:25:02.896 NVM Subsystem Reset: Not Supported 00:25:02.896 Command Sets Supported 00:25:02.896 NVM Command Set: Supported 00:25:02.896 Boot Partition: Not Supported 00:25:02.896 Memory Page Size Minimum: 4096 bytes 00:25:02.896 Memory Page Size Maximum: 4096 bytes 00:25:02.896 Persistent Memory Region: Not Supported 00:25:02.896 Optional Asynchronous Events Supported 00:25:02.896 Namespace Attribute Notices: Supported 00:25:02.896 Firmware Activation Notices: Not Supported 00:25:02.896 ANA Change Notices: Not Supported 00:25:02.896 PLE Aggregate Log Change Notices: Not Supported 00:25:02.896 LBA Status Info Alert Notices: Not Supported 00:25:02.896 EGE Aggregate Log Change Notices: Not Supported 00:25:02.896 Normal NVM Subsystem Shutdown event: Not Supported 00:25:02.896 Zone Descriptor Change Notices: Not Supported 00:25:02.896 Discovery Log Change Notices: Not Supported 00:25:02.896 Controller Attributes 00:25:02.896 128-bit Host Identifier: Supported 00:25:02.896 Non-Operational Permissive Mode: Not Supported 00:25:02.896 NVM Sets: Not Supported 00:25:02.896 Read Recovery Levels: Not Supported 00:25:02.896 Endurance Groups: Not Supported 00:25:02.896 Predictable Latency Mode: Not Supported 00:25:02.896 Traffic Based Keep Alive: Not Supported 00:25:02.896 Namespace Granularity: Not Supported 00:25:02.896 SQ Associations: Not Supported 00:25:02.896 UUID List: Not Supported 00:25:02.896 Multi-Domain Subsystem: Not Supported 00:25:02.896 Fixed Capacity Management: Not Supported 00:25:02.896 Variable Capacity Management: Not Supported 00:25:02.896 Delete Endurance Group: Not Supported 00:25:02.896 Delete NVM Set: Not Supported 00:25:02.896 Extended LBA Formats Supported: Not Supported 00:25:02.896 Flexible Data Placement Supported: Not Supported 00:25:02.896 00:25:02.896 Controller Memory Buffer Support 00:25:02.896 ================================ 00:25:02.896 Supported: No 00:25:02.896 00:25:02.896 Persistent Memory Region Support 00:25:02.896 ================================ 00:25:02.896 Supported: No 00:25:02.896 00:25:02.896 Admin Command Set Attributes 00:25:02.896 ============================ 00:25:02.896 Security Send/Receive: Not Supported 00:25:02.896 Format NVM: Not Supported 00:25:02.896 Firmware Activate/Download: Not Supported 00:25:02.896 Namespace Management: Not Supported 00:25:02.896 Device Self-Test: Not Supported 00:25:02.896 Directives: Not Supported 00:25:02.896 NVMe-MI: Not Supported 00:25:02.896 Virtualization Management: Not Supported 00:25:02.896 Doorbell Buffer Config: Not Supported 00:25:02.896 Get LBA Status Capability: Not Supported 00:25:02.896 Command & Feature Lockdown Capability: Not Supported 00:25:02.896 Abort Command Limit: 4 00:25:02.896 Async Event Request Limit: 4 00:25:02.896 Number of Firmware Slots: N/A 00:25:02.896 Firmware Slot 1 Read-Only: N/A 00:25:02.896 Firmware Activation Without Reset: N/A 00:25:02.896 Multiple Update Detection Support: N/A 00:25:02.896 Firmware Update Granularity: No Information Provided 00:25:02.896 Per-Namespace SMART Log: No 00:25:02.896 Asymmetric Namespace Access Log Page: Not Supported 00:25:02.896 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:02.896 Command Effects Log Page: Supported 00:25:02.896 Get Log Page Extended Data: Supported 00:25:02.896 Telemetry Log Pages: Not Supported 00:25:02.896 Persistent Event Log 
Pages: Not Supported 00:25:02.896 Supported Log Pages Log Page: May Support 00:25:02.896 Commands Supported & Effects Log Page: Not Supported 00:25:02.896 Feature Identifiers & Effects Log Page: May Support 00:25:02.896 NVMe-MI Commands & Effects Log Page: May Support 00:25:02.896 Data Area 4 for Telemetry Log: Not Supported 00:25:02.896 Error Log Page Entries Supported: 128 00:25:02.896 Keep Alive: Supported 00:25:02.896 Keep Alive Granularity: 10000 ms 00:25:02.896 00:25:02.896 NVM Command Set Attributes 00:25:02.896 ========================== 00:25:02.896 Submission Queue Entry Size 00:25:02.896 Max: 64 00:25:02.896 Min: 64 00:25:02.896 Completion Queue Entry Size 00:25:02.896 Max: 16 00:25:02.896 Min: 16 00:25:02.896 Number of Namespaces: 32 00:25:02.896 Compare Command: Supported 00:25:02.896 Write Uncorrectable Command: Not Supported 00:25:02.896 Dataset Management Command: Supported 00:25:02.896 Write Zeroes Command: Supported 00:25:02.896 Set Features Save Field: Not Supported 00:25:02.896 Reservations: Supported 00:25:02.896 Timestamp: Not Supported 00:25:02.896 Copy: Supported 00:25:02.896 Volatile Write Cache: Present 00:25:02.896 Atomic Write Unit (Normal): 1 00:25:02.896 Atomic Write Unit (PFail): 1 00:25:02.896 Atomic Compare & Write Unit: 1 00:25:02.896 Fused Compare & Write: Supported 00:25:02.896 Scatter-Gather List 00:25:02.897 SGL Command Set: Supported 00:25:02.897 SGL Keyed: Supported 00:25:02.897 SGL Bit Bucket Descriptor: Not Supported 00:25:02.897 SGL Metadata Pointer: Not Supported 00:25:02.897 Oversized SGL: Not Supported 00:25:02.897 SGL Metadata Address: Not Supported 00:25:02.897 SGL Offset: Supported 00:25:02.897 Transport SGL Data Block: Not Supported 00:25:02.897 Replay Protected Memory Block: Not Supported 00:25:02.897 00:25:02.897 Firmware Slot Information 00:25:02.897 ========================= 00:25:02.897 Active slot: 1 00:25:02.897 Slot 1 Firmware Revision: 24.09 00:25:02.897 00:25:02.897 00:25:02.897 Commands Supported and Effects 00:25:02.897 ============================== 00:25:02.897 Admin Commands 00:25:02.897 -------------- 00:25:02.897 Get Log Page (02h): Supported 00:25:02.897 Identify (06h): Supported 00:25:02.897 Abort (08h): Supported 00:25:02.897 Set Features (09h): Supported 00:25:02.897 Get Features (0Ah): Supported 00:25:02.897 Asynchronous Event Request (0Ch): Supported 00:25:02.897 Keep Alive (18h): Supported 00:25:02.897 I/O Commands 00:25:02.897 ------------ 00:25:02.897 Flush (00h): Supported LBA-Change 00:25:02.897 Write (01h): Supported LBA-Change 00:25:02.897 Read (02h): Supported 00:25:02.897 Compare (05h): Supported 00:25:02.897 Write Zeroes (08h): Supported LBA-Change 00:25:02.897 Dataset Management (09h): Supported LBA-Change 00:25:02.897 Copy (19h): Supported LBA-Change 00:25:02.897 00:25:02.897 Error Log 00:25:02.897 ========= 00:25:02.897 00:25:02.897 Arbitration 00:25:02.897 =========== 00:25:02.897 Arbitration Burst: 1 00:25:02.897 00:25:02.897 Power Management 00:25:02.897 ================ 00:25:02.897 Number of Power States: 1 00:25:02.897 Current Power State: Power State #0 00:25:02.897 Power State #0: 00:25:02.897 Max Power: 0.00 W 00:25:02.897 Non-Operational State: Operational 00:25:02.897 Entry Latency: Not Reported 00:25:02.897 Exit Latency: Not Reported 00:25:02.897 Relative Read Throughput: 0 00:25:02.897 Relative Read Latency: 0 00:25:02.897 Relative Write Throughput: 0 00:25:02.897 Relative Write Latency: 0 00:25:02.897 Idle Power: Not Reported 00:25:02.897 Active Power: Not Reported 00:25:02.897 
Non-Operational Permissive Mode: Not Supported 00:25:02.897 00:25:02.897 Health Information 00:25:02.897 ================== 00:25:02.897 Critical Warnings: 00:25:02.897 Available Spare Space: OK 00:25:02.897 Temperature: OK 00:25:02.897 Device Reliability: OK 00:25:02.897 Read Only: No 00:25:02.897 Volatile Memory Backup: OK 00:25:02.897 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:02.897 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:02.897 Available Spare: 0% 00:25:02.897 Available Spare Threshold: 0% 00:25:02.897 Life Percentage Used:[2024-07-15 14:11:00.961307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.961312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19dfec0) 00:25:02.897 [2024-07-15 14:11:00.961319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.897 [2024-07-15 14:11:00.961330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a638c0, cid 7, qid 0 00:25:02.897 [2024-07-15 14:11:00.961507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.897 [2024-07-15 14:11:00.961514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.897 [2024-07-15 14:11:00.961517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.961521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a638c0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.961552] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:02.897 [2024-07-15 14:11:00.961561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62e40) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.961567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.897 [2024-07-15 14:11:00.961572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a62fc0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.961577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.897 [2024-07-15 14:11:00.961583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a63140) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.961588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.897 [2024-07-15 14:11:00.961593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a632c0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.961597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.897 [2024-07-15 14:11:00.961605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.961609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.961612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19dfec0) 00:25:02.897 [2024-07-15 14:11:00.961619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.897 [2024-07-15 14:11:00.961630] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a632c0, cid 3, qid 0 00:25:02.897 [2024-07-15 14:11:00.961800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.897 [2024-07-15 14:11:00.961807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.897 [2024-07-15 14:11:00.961811] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.961815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a632c0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.961821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.961825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.961828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19dfec0) 00:25:02.897 [2024-07-15 14:11:00.961835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.897 [2024-07-15 14:11:00.961847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a632c0, cid 3, qid 0 00:25:02.897 [2024-07-15 14:11:00.962084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.897 [2024-07-15 14:11:00.962090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.897 [2024-07-15 14:11:00.962094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a632c0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.962102] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:02.897 [2024-07-15 14:11:00.962106] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:02.897 [2024-07-15 14:11:00.962116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19dfec0) 00:25:02.897 [2024-07-15 14:11:00.962130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.897 [2024-07-15 14:11:00.962140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a632c0, cid 3, qid 0 00:25:02.897 [2024-07-15 14:11:00.962315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.897 [2024-07-15 14:11:00.962321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.897 [2024-07-15 14:11:00.962325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a632c0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.962338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19dfec0) 00:25:02.897 [2024-07-15 14:11:00.962355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.897 [2024-07-15 14:11:00.962364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a632c0, cid 3, qid 0 00:25:02.897 [2024-07-15 14:11:00.962555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.897 [2024-07-15 14:11:00.962561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.897 [2024-07-15 14:11:00.962564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a632c0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.962577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.962585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19dfec0) 00:25:02.897 [2024-07-15 14:11:00.962591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.897 [2024-07-15 14:11:00.962601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a632c0, cid 3, qid 0 00:25:02.897 [2024-07-15 14:11:00.966762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.897 [2024-07-15 14:11:00.966771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.897 [2024-07-15 14:11:00.966774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.897 [2024-07-15 14:11:00.966778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a632c0) on tqpair=0x19dfec0 00:25:02.897 [2024-07-15 14:11:00.966786] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:25:02.897 0% 00:25:02.897 Data Units Read: 0 00:25:02.897 Data Units Written: 0 00:25:02.897 Host Read Commands: 0 00:25:02.897 Host Write Commands: 0 00:25:02.897 Controller Busy Time: 0 minutes 00:25:02.897 Power Cycles: 0 00:25:02.897 Power On Hours: 0 hours 00:25:02.897 Unsafe Shutdowns: 0 00:25:02.897 Unrecoverable Media Errors: 0 00:25:02.897 Lifetime Error Log Entries: 0 00:25:02.897 Warning Temperature Time: 0 minutes 00:25:02.897 Critical Temperature Time: 0 minutes 00:25:02.897 00:25:02.897 Number of Queues 00:25:02.897 ================ 00:25:02.897 Number of I/O Submission Queues: 127 00:25:02.897 Number of I/O Completion Queues: 127 00:25:02.897 00:25:02.897 Active Namespaces 00:25:02.897 ================= 00:25:02.897 Namespace ID:1 00:25:02.897 Error Recovery Timeout: Unlimited 00:25:02.898 Command Set Identifier: NVM (00h) 00:25:02.898 Deallocate: Supported 00:25:02.898 Deallocated/Unwritten Error: Not Supported 00:25:02.898 Deallocated Read Value: Unknown 00:25:02.898 Deallocate in Write Zeroes: Not Supported 00:25:02.898 Deallocated Guard Field: 0xFFFF 00:25:02.898 Flush: Supported 00:25:02.898 Reservation: Supported 00:25:02.898 Namespace Sharing Capabilities: Multiple Controllers 00:25:02.898 Size (in LBAs): 131072 (0GiB) 00:25:02.898 Capacity (in LBAs): 131072 (0GiB) 00:25:02.898 Utilization (in LBAs): 131072 (0GiB) 00:25:02.898 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:02.898 EUI64: ABCDEF0123456789 00:25:02.898 UUID: 8878b2c7-978e-4eae-9846-e38cb283f8fc 00:25:02.898 Thin Provisioning: Not Supported 00:25:02.898 Per-NS Atomic Units: Yes 00:25:02.898 Atomic Boundary Size (Normal): 0 
00:25:02.898 Atomic Boundary Size (PFail): 0 00:25:02.898 Atomic Boundary Offset: 0 00:25:02.898 Maximum Single Source Range Length: 65535 00:25:02.898 Maximum Copy Length: 65535 00:25:02.898 Maximum Source Range Count: 1 00:25:02.898 NGUID/EUI64 Never Reused: No 00:25:02.898 Namespace Write Protected: No 00:25:02.898 Number of LBA Formats: 1 00:25:02.898 Current LBA Format: LBA Format #00 00:25:02.898 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:02.898 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.898 14:11:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:03.158 14:11:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.158 14:11:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:03.158 14:11:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.158 14:11:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.158 rmmod nvme_tcp 00:25:03.158 rmmod nvme_fabrics 00:25:03.158 rmmod nvme_keyring 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1473472 ']' 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1473472 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1473472 ']' 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1473472 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473472 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473472' 00:25:03.158 killing process with pid 1473472 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1473472 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1473472 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
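[Editor's note] With the controller data dumped, the identify test deletes the subsystem over RPC and `nvmftestfini` unwinds the setup: the initiator-side kernel modules are removed (the rmmod lines above), the nvmf_tgt process is killed, and `nvmf_tcp_fini` discards the target-side network namespace and test addresses (traced immediately below). Condensed into a sketch, with `$rpc_py` and `$nvmfpid` standing in for the harness variables, and the `ip netns del` line an assumption about what `_remove_spdk_ns` does:

```bash
# Teardown sketch (not the verbatim harness functions)
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem first
modprobe -v -r nvme-tcp                                     # unload initiator transport modules
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                             # stop the nvmf_tgt reactors
ip netns del cvl_0_0_ns_spdk 2>/dev/null || true            # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                    # clear the test IP off the e810 port
```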
00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.158 14:11:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.707 14:11:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.707 00:25:05.707 real 0m11.914s 00:25:05.707 user 0m8.017s 00:25:05.707 sys 0m6.394s 00:25:05.707 14:11:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.707 14:11:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:05.707 ************************************ 00:25:05.707 END TEST nvmf_identify 00:25:05.707 ************************************ 00:25:05.707 14:11:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:05.707 14:11:03 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:05.707 14:11:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:05.707 14:11:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.707 14:11:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:05.707 ************************************ 00:25:05.707 START TEST nvmf_perf 00:25:05.707 ************************************ 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:05.707 * Looking for test storage... 
00:25:05.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.707 14:11:03 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.707 14:11:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.858 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:13.859 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:13.859 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:13.859 Found net devices under 0000:31:00.0: cvl_0_0 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:13.859 Found net devices under 0000:31:00.1: cvl_0_1 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:13.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:25:13.859 00:25:13.859 --- 10.0.0.2 ping statistics --- 00:25:13.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.859 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:25:13.859 00:25:13.859 --- 10.0.0.1 ping statistics --- 00:25:13.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.859 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1478496 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1478496 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1478496 ']' 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.859 14:11:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:13.859 [2024-07-15 14:11:11.858590] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:13.859 [2024-07-15 14:11:11.858655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.859 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.859 [2024-07-15 14:11:11.940181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.120 [2024-07-15 14:11:12.014157] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.120 [2024-07-15 14:11:12.014196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:14.120 [2024-07-15 14:11:12.014204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.120 [2024-07-15 14:11:12.014210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.120 [2024-07-15 14:11:12.014216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.120 [2024-07-15 14:11:12.014353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.120 [2024-07-15 14:11:12.014469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.120 [2024-07-15 14:11:12.014629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.120 [2024-07-15 14:11:12.014630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.750 14:11:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.750 14:11:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:25:14.750 14:11:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:14.751 14:11:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.751 14:11:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:14.751 14:11:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.751 14:11:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:14.751 14:11:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:15.323 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:15.323 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:15.323 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:15.323 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:15.583 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:15.583 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:15.583 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:15.583 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:15.583 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:15.583 [2024-07-15 14:11:13.656880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.583 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.844 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:15.844 14:11:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.105 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:16.105 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:16.105 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.366 [2024-07-15 14:11:14.323393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.366 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:16.628 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:16.628 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:16.628 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:16.628 14:11:14 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:18.009 Initializing NVMe Controllers 00:25:18.009 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:18.009 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:18.009 Initialization complete. Launching workers. 00:25:18.009 ======================================================== 00:25:18.009 Latency(us) 00:25:18.009 Device Information : IOPS MiB/s Average min max 00:25:18.009 PCIE (0000:65:00.0) NSID 1 from core 0: 79357.76 309.99 402.69 13.31 5317.88 00:25:18.009 ======================================================== 00:25:18.009 Total : 79357.76 309.99 402.69 13.31 5317.88 00:25:18.009 00:25:18.009 14:11:15 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:18.009 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.389 Initializing NVMe Controllers 00:25:19.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:19.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:19.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:19.389 Initialization complete. Launching workers. 
00:25:19.389 ======================================================== 00:25:19.389 Latency(us) 00:25:19.389 Device Information : IOPS MiB/s Average min max 00:25:19.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 106.00 0.41 9778.56 114.48 46208.84 00:25:19.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18081.88 5986.66 54854.32 00:25:19.389 ======================================================== 00:25:19.390 Total : 162.00 0.63 12648.84 114.48 54854.32 00:25:19.390 00:25:19.390 14:11:17 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:19.390 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.329 Initializing NVMe Controllers 00:25:20.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:20.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:20.330 Initialization complete. Launching workers. 00:25:20.330 ======================================================== 00:25:20.330 Latency(us) 00:25:20.330 Device Information : IOPS MiB/s Average min max 00:25:20.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11441.11 44.69 2797.00 481.24 6415.70 00:25:20.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3889.36 15.19 8271.20 6946.48 16094.83 00:25:20.330 ======================================================== 00:25:20.330 Total : 15330.47 59.88 4185.81 481.24 16094.83 00:25:20.330 00:25:20.330 14:11:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:20.330 14:11:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:20.330 14:11:18 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:20.590 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.129 Initializing NVMe Controllers 00:25:23.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:23.129 Controller IO queue size 128, less than required. 00:25:23.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.129 Controller IO queue size 128, less than required. 00:25:23.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:23.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:23.129 Initialization complete. Launching workers. 
00:25:23.129 ======================================================== 00:25:23.129 Latency(us) 00:25:23.129 Device Information : IOPS MiB/s Average min max 00:25:23.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1459.65 364.91 89529.24 59255.68 135374.42 00:25:23.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 621.35 155.34 215892.14 75995.51 338717.45 00:25:23.129 ======================================================== 00:25:23.130 Total : 2080.99 520.25 127259.02 59255.68 338717.45 00:25:23.130 00:25:23.130 14:11:21 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:23.130 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.390 No valid NVMe controllers or AIO or URING devices found 00:25:23.390 Initializing NVMe Controllers 00:25:23.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:23.390 Controller IO queue size 128, less than required. 00:25:23.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:23.390 Controller IO queue size 128, less than required. 00:25:23.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:23.390 WARNING: Some requested NVMe devices were skipped 00:25:23.390 14:11:21 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:23.390 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.949 Initializing NVMe Controllers 00:25:25.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.949 Controller IO queue size 128, less than required. 00:25:25.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:25.949 Controller IO queue size 128, less than required. 00:25:25.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:25.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:25.949 Initialization complete. Launching workers. 
00:25:25.949 00:25:25.949 ==================== 00:25:25.949 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:25.949 TCP transport: 00:25:25.949 polls: 24398 00:25:25.949 idle_polls: 10503 00:25:25.949 sock_completions: 13895 00:25:25.949 nvme_completions: 6559 00:25:25.949 submitted_requests: 9840 00:25:25.949 queued_requests: 1 00:25:25.949 00:25:25.949 ==================== 00:25:25.949 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:25.949 TCP transport: 00:25:25.949 polls: 24972 00:25:25.949 idle_polls: 12964 00:25:25.949 sock_completions: 12008 00:25:25.949 nvme_completions: 5925 00:25:25.949 submitted_requests: 8880 00:25:25.949 queued_requests: 1 00:25:25.949 ======================================================== 00:25:25.949 Latency(us) 00:25:25.949 Device Information : IOPS MiB/s Average min max 00:25:25.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1639.31 409.83 79312.68 42585.59 127482.22 00:25:25.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1480.83 370.21 87939.42 55644.42 141531.23 00:25:25.949 ======================================================== 00:25:25.949 Total : 3120.14 780.04 83406.96 42585.59 141531.23 00:25:25.949 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:25.949 rmmod nvme_tcp 00:25:25.949 rmmod nvme_fabrics 00:25:25.949 rmmod nvme_keyring 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1478496 ']' 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1478496 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1478496 ']' 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1478496 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:25:25.949 14:11:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.949 14:11:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1478496 00:25:25.949 14:11:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:25.949 14:11:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:25.949 14:11:24 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1478496' 00:25:25.949 killing process with pid 1478496 00:25:25.949 14:11:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1478496 00:25:25.949 14:11:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1478496 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.495 14:11:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.438 14:11:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.438 00:25:30.438 real 0m24.685s 00:25:30.438 user 0m57.812s 00:25:30.438 sys 0m8.635s 00:25:30.438 14:11:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.438 14:11:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:30.438 ************************************ 00:25:30.438 END TEST nvmf_perf 00:25:30.438 ************************************ 00:25:30.438 14:11:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:30.438 14:11:28 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:30.438 14:11:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:30.438 14:11:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.438 14:11:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.438 ************************************ 00:25:30.438 START TEST nvmf_fio_host 00:25:30.438 ************************************ 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:30.438 * Looking for test storage... 
00:25:30.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.438 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:30.439 14:11:28 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:38.588 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
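(Editor's note) The trace above is nvmf/common.sh walking the PCI bus and matching each NIC against the Intel E810 device IDs (0x1592, 0x159b) gathered into the e810 array. A minimal standalone sketch of that matching loop, assuming Linux sysfs; the ID list here is illustrative, not the script's full tables:
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor")    # 0x8086 = Intel
  device=$(cat "$pci/device")    # 0x159b = E810 port, as seen in this log
  if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
    echo "Found ${pci##*/} ($vendor - $device)"   # mirrors the 'Found 0000:31:00.0 (0x8086 - 0x159b)' lines
  fi
done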
00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:38.588 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:38.588 Found net devices under 0000:31:00.0: cvl_0_0 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:38.588 Found net devices under 0000:31:00.1: cvl_0_1 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
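(Editor's note) With both ports discovered and is_hw=yes, nvmf_tcp_init next rebuilds the target-in-a-namespace topology on the two physical ports. A condensed recap of the commands the trace runs below, copied from this log (run as root; the interface names are the cvl_0_* devices found above):
ip netns add cvl_0_0_ns_spdk                  # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port moves into the target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                            # initiator -> target reachability check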
00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.588 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.589 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:38.589 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.589 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.589 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:38.589 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:38.589 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.589 14:11:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:38.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:25:38.589 00:25:38.589 --- 10.0.0.2 ping statistics --- 00:25:38.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.589 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:25:38.589 00:25:38.589 --- 10.0.0.1 ping statistics --- 00:25:38.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.589 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1485810 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1485810 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1485810 ']' 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.589 14:11:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.589 [2024-07-15 14:11:36.312491] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:38.589 [2024-07-15 14:11:36.312552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.589 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.589 [2024-07-15 14:11:36.392315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.589 [2024-07-15 14:11:36.467712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:38.589 [2024-07-15 14:11:36.467757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.589 [2024-07-15 14:11:36.467766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.589 [2024-07-15 14:11:36.467773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.589 [2024-07-15 14:11:36.467778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.589 [2024-07-15 14:11:36.467840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.589 [2024-07-15 14:11:36.467958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.589 [2024-07-15 14:11:36.468114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.589 [2024-07-15 14:11:36.468115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.159 14:11:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.159 14:11:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:39.159 14:11:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:39.159 [2024-07-15 14:11:37.235665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.159 14:11:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:39.159 14:11:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.159 14:11:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.420 14:11:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:39.420 Malloc1 00:25:39.420 14:11:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:39.681 14:11:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:39.943 14:11:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.943 [2024-07-15 14:11:37.969100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.943 14:11:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:40.205 14:11:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:40.466 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:40.467 fio-3.35 00:25:40.467 Starting 1 thread 00:25:40.728 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.278 00:25:43.278 test: (groupid=0, jobs=1): err= 0: pid=1486436: Mon Jul 15 14:11:40 2024 00:25:43.278 read: IOPS=9711, BW=37.9MiB/s (39.8MB/s)(76.1MiB/2006msec) 00:25:43.278 slat (usec): min=2, max=284, avg= 2.20, stdev= 2.93 00:25:43.278 clat (usec): min=3348, max=12382, avg=7258.22, stdev=560.80 00:25:43.278 lat (usec): min=3384, max=12384, avg=7260.42, stdev=560.58 00:25:43.278 clat percentiles (usec): 00:25:43.278 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6587], 20.00th=[ 6849], 00:25:43.278 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:25:43.278 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8094], 00:25:43.278 | 99.00th=[ 8717], 99.50th=[ 9634], 99.90th=[10814], 99.95th=[11338], 00:25:43.278 | 99.99th=[12256] 00:25:43.278 bw ( KiB/s): min=37964, 
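(Editor's note) The fio job above is driven through SPDK's external fio ioengine: fio_nvme LD_PRELOADs build/fio/spdk_nvme and passes the NVMe-oF target as a transport-ID --filename instead of a block device. A minimal equivalent invocation, with the paths exactly as they appear in this workspace:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
  --bs=4096    # the fio banner above shows ioengine=spdk, provided by the preloaded plugin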
max=39536, per=99.89%, avg=38803.00, stdev=644.98, samples=4 00:25:43.278 iops : min= 9491, max= 9884, avg=9700.75, stdev=161.24, samples=4 00:25:43.278 write: IOPS=9720, BW=38.0MiB/s (39.8MB/s)(76.2MiB/2006msec); 0 zone resets 00:25:43.278 slat (usec): min=2, max=215, avg= 2.29, stdev= 1.76 00:25:43.278 clat (usec): min=2688, max=11618, avg=5834.13, stdev=488.73 00:25:43.278 lat (usec): min=2706, max=11621, avg=5836.42, stdev=488.58 00:25:43.278 clat percentiles (usec): 00:25:43.278 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5473], 00:25:43.278 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:25:43.278 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:25:43.278 | 99.00th=[ 7373], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[ 9896], 00:25:43.278 | 99.99th=[10814] 00:25:43.278 bw ( KiB/s): min=38488, max=39552, per=99.96%, avg=38868.50, stdev=494.61, samples=4 00:25:43.278 iops : min= 9622, max= 9888, avg=9717.00, stdev=123.77, samples=4 00:25:43.278 lat (msec) : 4=0.11%, 10=99.70%, 20=0.19% 00:25:43.278 cpu : usr=73.27%, sys=25.04%, ctx=44, majf=0, minf=6 00:25:43.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:43.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:43.278 issued rwts: total=19482,19500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:43.278 00:25:43.278 Run status group 0 (all jobs): 00:25:43.278 READ: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=76.1MiB (79.8MB), run=2006-2006msec 00:25:43.278 WRITE: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=76.2MiB (79.9MB), run=2006-2006msec 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:43.278 14:11:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
awk '{print $3}'
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:25:43.278 14:11:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:43.278 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:25:43.278 fio-3.35
00:25:43.278 Starting 1 thread
00:25:43.539 EAL: No free 2048 kB hugepages reported on node 1
00:25:46.153
00:25:46.153 test: (groupid=0, jobs=1): err= 0: pid=1487183: Mon Jul 15 14:11:43 2024
00:25:46.153 read: IOPS=9341, BW=146MiB/s (153MB/s)(293MiB/2006msec)
00:25:46.153 slat (usec): min=3, max=110, avg= 3.64, stdev= 1.58
00:25:46.153 clat (usec): min=2176, max=18844, avg=8158.48, stdev=1858.38
00:25:46.153 lat (usec): min=2180, max=18848, avg=8162.12, stdev=1858.51
00:25:46.153 clat percentiles (usec):
00:25:46.153 | 1.00th=[ 4178], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6521],
00:25:46.153 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717],
00:25:46.153 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338],
00:25:46.153 | 99.00th=[12387], 99.50th=[12911], 99.90th=[14353], 99.95th=[14746],
00:25:46.153 | 99.99th=[15008]
00:25:46.153 bw ( KiB/s): min=72320, max=76704, per=49.83%, avg=74480.00, stdev=1928.96, samples=4
00:25:46.153 iops : min= 4520, max= 4794, avg=4655.00, stdev=120.56, samples=4
00:25:46.153 write: IOPS=5291, BW=82.7MiB/s (86.7MB/s)(151MiB/1831msec); 0 zone resets
00:25:46.153 slat (usec): min=40, max=359, avg=41.07, stdev= 6.92
00:25:46.153 clat (usec): min=2655, max=15135, avg=9530.16, stdev=1593.34
00:25:46.153 lat (usec): min=2695, max=15175, avg=9571.23, stdev=1594.41
00:25:46.153 clat percentiles (usec):
00:25:46.153 | 1.00th=[ 6194], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160],
00:25:46.153 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896],
00:25:46.153 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11600], 95.00th=[12518],
00:25:46.153 | 99.00th=[13698], 99.50th=[14091], 99.90th=[14877], 99.95th=[15008],
00:25:46.153 | 99.99th=[15139]
00:25:46.153 bw ( KiB/s): min=74592, max=79872, per=91.02%, avg=77064.00, stdev=2403.39, samples=4
00:25:46.153 iops : min= 4662, max= 4992, avg=4816.50, stdev=150.21, samples=4
00:25:46.153 lat (msec) : 4=0.59%, 10=76.53%, 20=22.89%
00:25:46.153 cpu : usr=85.34%, sys=13.22%, ctx=15, majf=0, minf=8
00:25:46.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:46.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:46.153 issued rwts: total=18740,9689,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.153 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:46.153
00:25:46.153 Run status group 0 (all jobs):
00:25:46.153 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=293MiB (307MB), run=2006-2006msec
00:25:46.153 WRITE: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=151MiB (159MB), run=1831-1831msec
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:46.153 rmmod nvme_tcp
00:25:46.153 rmmod nvme_fabrics
00:25:46.153 rmmod nvme_keyring
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1485810 ']'
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1485810
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1485810 ']'
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1485810
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:46.153 14:11:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1485810
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1485810'
00:25:46.154 killing process with pid 1485810
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1485810
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1485810
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:46.154 14:11:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:48.704 14:11:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:48.704
00:25:48.704 real 0m18.060s
00:25:48.704 user 1m6.268s
00:25:48.704 sys 0m7.840s
00:25:48.704 14:11:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:48.704 14:11:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.704 ************************************
00:25:48.704 END TEST nvmf_fio_host
00:25:48.704 ************************************
00:25:48.704 14:11:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:48.704 14:11:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:25:48.704 14:11:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:48.704 14:11:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:48.704 14:11:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:48.704 ************************************
00:25:48.704 START TEST nvmf_failover
00:25:48.704 ************************************
00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:25:48.704 * Looking for test storage...
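For reference, the fio pass above goes through SPDK's fio plugin rather than the kernel NVMe initiator: LD_PRELOAD points fio at build/fio/spdk_nvme, and the target subsystem is named with fio's key=value --filename syntax instead of a block device. A minimal standalone equivalent, sketched from the job parameters visible in the fio header above (the job name is arbitrary, and note the SPDK plugin requires thread=1):

  # Sketch: fio against an NVMe/TCP subsystem via the SPDK plugin.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LD_PRELOAD=$rootdir/build/fio/spdk_nvme /usr/src/fio/fio \
      --name=test --ioengine=spdk --thread=1 \
      --rw=randrw --bs=16k --iodepth=128 \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'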
00:25:48.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.704 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.705 14:11:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:56.846 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:56.846 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:56.846 Found net devices under 0000:31:00.0: cvl_0_0 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:56.846 Found net devices under 0000:31:00.1: cvl_0_1 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:56.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:56.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms
00:25:56.846
00:25:56.846 --- 10.0.0.2 ping statistics ---
00:25:56.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:56.846 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms
00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:56.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:56.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms
00:25:56.846
00:25:56.846 --- 10.0.0.1 ping statistics ---
00:25:56.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:56.846 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0
00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:56.846 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1492277
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1492277
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1492277 ']'
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:56.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:56.847 14:11:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:56.847 [2024-07-15 14:11:54.520293] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
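The setup traced above puts the target NIC port in its own network namespace, so that the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over the physical link, then launches nvmf_tgt inside that namespace and waits for its RPC socket. Condensed into a standalone sketch, with $rootdir standing for the SPDK checkout and the polling loop approximating what waitforlisten does (rpc_get_methods is just a cheap RPC used to probe readiness):

  # Sketch: namespace split and target start-up, as traced above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # -m 0xE pins reactors to cores 1-3, matching the three reactors reported just below.
  ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  until $rootdir/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                          # wait for /var/tmp/spdk.sock to answer
  done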
00:25:56.847 [2024-07-15 14:11:54.520357] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.847 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.847 [2024-07-15 14:11:54.615901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:56.847 [2024-07-15 14:11:54.709926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.847 [2024-07-15 14:11:54.709986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.847 [2024-07-15 14:11:54.709994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.847 [2024-07-15 14:11:54.710001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.847 [2024-07-15 14:11:54.710008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.847 [2024-07-15 14:11:54.710139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.847 [2024-07-15 14:11:54.710303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.847 [2024-07-15 14:11:54.710304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:57.420 [2024-07-15 14:11:55.488300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.420 14:11:55 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:57.681 Malloc0 00:25:57.681 14:11:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.942 14:11:55 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.202 14:11:56 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.202 [2024-07-15 14:11:56.200956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.202 14:11:56 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:58.464 [2024-07-15 
14:11:56.369403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:58.464 [2024-07-15 14:11:56.529927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1492641 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1492641 /var/tmp/bdevperf.sock 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1492641 ']' 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:58.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.464 14:11:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:59.409 14:11:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.409 14:11:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:59.409 14:11:57 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:59.670 NVMe0n1 00:25:59.670 14:11:57 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:59.931 00:25:59.931 14:11:58 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1492976 00:25:59.931 14:11:58 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:59.931 14:11:58 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.313 14:11:59 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.313 [2024-07-15 14:11:59.169115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e73770 is same with the state(5) to be set 00:26:01.313 [2024-07-15 14:11:59.169155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e73770 is same with the state(5) to be set
00:26:01.314 [2024-07-15 14:11:59.169364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e73770 is same with the state(5) to be set
00:26:01.314 14:11:59 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:04.619 14:12:02 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:04.620
00:26:04.620 14:12:02 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:04.880 14:12:02 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:08.179 14:12:05 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:08.179 [2024-07-15 14:12:05.918387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:08.179 14:12:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:09.122 14:12:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:09.122 [2024-07-15 14:12:07.092672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e75550 is same with the state(5) to be set
00:26:09.122 [2024-07-15 14:12:07.092883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e75550 is same with the state(5) to be set
00:26:09.122 14:12:07 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1492976
00:26:15.714 0
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1492641
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1492641 ']'
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1492641
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1492641
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1492641'
00:26:15.714 killing process with pid 1492641
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1492641
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1492641
00:26:15.714 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:15.714 [2024-07-15 14:11:56.595801] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:26:15.714 [2024-07-15 14:11:56.595860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492641 ]
00:26:15.714 EAL: No free 2048 kB hugepages reported on node 1
00:26:15.714 [2024-07-15 14:11:56.661445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:15.714 [2024-07-15 14:11:56.725383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:15.714 Running I/O for 15 seconds...
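The try.txt dump that follows was written by bdevperf while failover.sh cycled the subsystem's listeners underneath it. Stripped of the xtrace noise, the sequence traced above amounts to the following (paths abbreviated: rpc.py is scripts/rpc.py in the SPDK tree, bdevperf lives under build/examples, and bdevperf.py under examples/bdev/bdevperf; rpc.py with no -s talks to the target's /var/tmp/spdk.sock):

  # Sketch of the traced failover flow, uncluttered.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # and back to 4420
  wait $run_test_pid                                                                            # returned 0 above

bdevperf's -z/-r flags make it start idle and wait on its own RPC socket, which is why the controllers are attached and the test kicked off over /var/tmp/bdevperf.sock rather than on the command line.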
00:26:15.714 [2024-07-15 14:11:59.169931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.714 [2024-07-15 14:11:59.169966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.714 [2024-07-15 14:11:59.169982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.714 [2024-07-15 14:11:59.169991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.714 [2024-07-15 14:11:59.170001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.714 [2024-07-15 14:11:59.170008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
0x0 00:26:15.715 [2024-07-15 14:11:59.170828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.170979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.170988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 
14:11:59.170995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.171005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.171014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.171024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.171031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.171040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.171047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.171056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.171064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.171074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.171081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.171090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.715 [2024-07-15 14:11:59.171098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.715 [2024-07-15 14:11:59.171107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 
[2024-07-15 14:11:59.171676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.716 [2024-07-15 14:11:59.171822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.716 [2024-07-15 14:11:59.171831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.717 [2024-07-15 14:11:59.171838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.171990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.171999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.172007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.172023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.172039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.172056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.172077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.172093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.717 [2024-07-15 14:11:59.172110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.717 [2024-07-15 14:11:59.172137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.717 [2024-07-15 14:11:59.172147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95112 len:8 PRP1 0x0 PRP2 0x0 00:26:15.717 [2024-07-15 14:11:59.172154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.717 [2024-07-15 14:11:59.172193] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17a6df0 was disconnected and freed. reset controller. 
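
This abort storm is the expected teardown path: when bdev_nvme disconnects a qpair for failover, every command still queued on that submission queue is completed back to the bdev layer as ABORTED - SQ DELETION rather than being dropped. When triaging a run like this it is usually easier to tally the notices than to read them; the following is a minimal sketch (a hypothetical helper, not part of the SPDK test suite) that assumes only the nvme_io_qpair_print_command format visible above:

#!/usr/bin/env python3
# Hypothetical triage helper, not SPDK tooling: count the READ/WRITE
# commands that nvme_io_qpair_print_command reports as queued when the
# submission queue is deleted, and report the LBA span they covered.
# Usage: python3 tally_aborts.py < console.log
import re
import sys
from collections import Counter

CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: "
                 r"(READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+)")

def tally(stream):
    counts, lbas = Counter(), []
    for line in stream:
        for opcode, lba in CMD.findall(line):
            counts[opcode] += 1
            lbas.append(int(lba))
    span = (min(lbas), max(lbas)) if lbas else None
    return counts, span

if __name__ == "__main__":
    counts, span = tally(sys.stdin)
    print(f"aborted commands: {dict(counts)}  lba span: {span}")
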
00:26:15.717 [2024-07-15 14:11:59.172202] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:15.717 [2024-07-15 14:11:59.172221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.717 [2024-07-15 14:11:59.172229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.717 [2024-07-15 14:11:59.172238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.717 [2024-07-15 14:11:59.172245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.717 [2024-07-15 14:11:59.172253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.717 [2024-07-15 14:11:59.172261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.717 [2024-07-15 14:11:59.172270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.717 [2024-07-15 14:11:59.172278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.717 [2024-07-15 14:11:59.172286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:15.717 [2024-07-15 14:11:59.175898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:15.717 [2024-07-15 14:11:59.175923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17aaea0 (9): Bad file descriptor
00:26:15.717 [2024-07-15 14:11:59.256324] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
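
Two things are worth reading off the block above. First, the failover itself is quick: the controller is disconnected at 14:11:59.175898 and _bdev_nvme_reset_ctrlr_complete reports success at 14:11:59.256324, roughly 80 ms for the 10.0.0.2:4420 to 10.0.0.2:4421 switch. Second, the (00/08) suffix on every completion is the (status code type/status code) pair in hex: SCT 0x0 is the NVMe Generic Command Status set, in which SC 0x08 is Command Aborted due to SQ Deletion, matching the text SPDK prints. A small decoder sketch follows; the tables cover only the codes seen in this log, and the full lists live in the NVMe base specification:

# Sketch: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
# Only the codes that appear in this log are tabled here.
SCT_NAMES = {0x0: "GENERIC COMMAND STATUS"}
SC_GENERIC = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

def decode_status(pair: str) -> str:
    sct, sc = (int(field, 16) for field in pair.strip("()").split("/"))
    sct_name = SCT_NAMES.get(sct, f"SCT 0x{sct:02x}")
    sc_name = SC_GENERIC.get(sc, f"SC 0x{sc:02x}") if sct == 0x0 else f"SC 0x{sc:02x}"
    return f"{sct_name} / {sc_name}"

print(decode_status("(00/08)"))   # -> GENERIC COMMAND STATUS / ABORTED - SQ DELETION
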
00:26:15.717 [2024-07-15 14:12:02.742491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.717 [2024-07-15 14:12:02.742534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/abort pairs repeat: queued writes lba 44432 through 44912 interleaved with queued reads lba 44112 through 44248, all completed as ABORTED - SQ DELETION (00/08) ...]
00:26:15.719 [2024-07-15 14:12:02.743902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.719 [2024-07-15 14:12:02.743910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44256 len:8 PRP1 0x0 PRP2 0x0
00:26:15.719 [2024-07-15 14:12:02.743917] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.743927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.743933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.743939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44264 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.743946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.743954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.743960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.743966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44272 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.743973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.743980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.743986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.743992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44280 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.743999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.744007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.744012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.744018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44288 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.744024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.744032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.744037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.744043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44920 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.744050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.744058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.744063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.744071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44928 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.744078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.744085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.744090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.744096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44936 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.744104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.744111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.744116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.744122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44944 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.744130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.744137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.719 [2024-07-15 14:12:02.744143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.719 [2024-07-15 14:12:02.744151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44952 len:8 PRP1 0x0 PRP2 0x0 00:26:15.719 [2024-07-15 14:12:02.744159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.719 [2024-07-15 14:12:02.744166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44960 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44968 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44976 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 
14:12:02.744243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44984 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44992 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45000 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45008 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45016 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45024 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744401] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45032 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45040 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45048 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45056 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45064 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45072 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:26:15.720 [2024-07-15 14:12:02.744562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45080 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45088 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45096 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45104 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45112 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45120 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744724] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45128 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44296 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44304 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.720 [2024-07-15 14:12:02.744812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44312 len:8 PRP1 0x0 PRP2 0x0 00:26:15.720 [2024-07-15 14:12:02.744819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.720 [2024-07-15 14:12:02.744827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.720 [2024-07-15 14:12:02.744832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.744837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44320 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.744845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.744853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.744858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.744864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44328 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.744871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.744879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.744885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.744891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44336 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44344 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44352 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44360 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44368 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44376 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 
14:12:02.755660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44384 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44392 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44400 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44408 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.721 [2024-07-15 14:12:02.755773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.721 [2024-07-15 14:12:02.755779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44416 len:8 PRP1 0x0 PRP2 0x0 00:26:15.721 [2024-07-15 14:12:02.755786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.721 [2024-07-15 14:12:02.755825] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17d97c0 was disconnected and freed. reset controller. 
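The status pair in every completion above, (00/08), is the NVMe Status Code Type and Status Code from the completion queue entry: SCT 0x0 (generic command status) with SC 0x08, Command Aborted due to SQ Deletion; the trailing p/m/dnr flags are the phase tag, the more bit, and the do-not-retry bit. With dnr:0 these aborts are retryable once the controller comes back. A minimal, self-contained C sketch of decoding that pair; the struct and names below are illustrative stand-ins, not SPDK's spdk_nvme_cpl:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe base spec values; 0x0/0x08 is what the log prints as "(00/08)". */
#define NVME_SCT_GENERIC            0x0 /* generic command status type */
#define NVME_SC_ABORTED_SQ_DELETION 0x8 /* command aborted, SQ deleted */

struct cpl_status {            /* illustrative stand-in, not spdk_nvme_cpl */
	uint8_t sct;           /* status code type: the "00" in "(00/08)" */
	uint8_t sc;            /* status code: the "08" in "(00/08)"      */
	bool    dnr;           /* do-not-retry bit, printed as "dnr:0"    */
};

/* An abort caused by deleting the submission queue never reached the
 * media, and dnr:0 permits the host to resubmit after the reset. */
static bool retryable_sq_deletion(const struct cpl_status *st)
{
	return st->sct == NVME_SCT_GENERIC &&
	       st->sc == NVME_SC_ABORTED_SQ_DELETION && !st->dnr;
}

int main(void)
{
	struct cpl_status st = { .sct = 0x0, .sc = 0x8, .dnr = false };
	printf("ABORTED - SQ DELETION (%02x/%02x) retryable=%d\n",
	       st.sct, st.sc, retryable_sq_deletion(&st));
	return 0;
}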
00:26:15.721 [2024-07-15 14:12:02.755835] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... 2024-07-15 14:12:02.755862-.755919: four outstanding admin commands elided: ASYNC EVENT REQUEST (0c) qid:0 cid:3/2/1/0 each completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-07-15 14:12:02.755926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-15 14:12:02.755954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17aaea0 (9): Bad file descriptor
[2024-07-15 14:12:02.759500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-15 14:12:02.800351] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
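The block above is the interesting part of this iteration: the dead qpair is freed, bdev_nvme selects the next transport ID (10.0.0.2:4421 to 10.0.0.2:4422), the remaining admin commands are aborted, the controller is marked failed and its TCP qpair can no longer even be flushed (Bad file descriptor), and the subsequent reset against the new path succeeds, after which the aborted, retryable I/O can be resubmitted. A schematic C sketch of that retry-next-path loop; it only mirrors the order of events in the log, not SPDK's actual implementation, and try_reset_path() plus the path list are hypothetical:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for the two listeners seen in the log. */
static const char *g_paths[] = { "10.0.0.2:4421", "10.0.0.2:4422" };

/* Pretend the first path's socket is dead (the "Bad file descriptor"
 * above) and the second accepts the controller reset. */
static int try_reset_path(const char *addr)
{
	return strcmp(addr, "10.0.0.2:4422") == 0 ? 0 : -1;
}

/* After outstanding I/O has been aborted with SQ DELETION status, walk
 * the transport IDs until a reset succeeds; this is the high-level shape
 * of the failover the log shows. */
static int failover_and_reset(void)
{
	for (size_t i = 0; i < sizeof(g_paths) / sizeof(g_paths[0]); i++) {
		printf("resetting controller via %s\n", g_paths[i]);
		if (try_reset_path(g_paths[i]) == 0) {
			printf("Resetting controller successful.\n");
			return 0; /* retryable aborts are resubmitted here */
		}
		printf("%s failed, trying next transport ID\n", g_paths[i]);
	}
	return -1; /* all paths down: controller stays in failed state */
}

int main(void)
{
	return failover_and_reset() == 0 ? 0 : 1;
}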
00:26:15.721 [... 2024-07-15 14:12:07.093659-.095041: repeated abort notices elided. After the failover the pattern recurs on qid:1: in-flight WRITE commands (lba:59104-59376, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba:58608-58968, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) are logged by nvme_qpair.c: 243:nvme_io_qpair_print_command and completed by nvme_qpair.c: 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:15.723 [2024-07-15 14:12:07.095051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.723 [2024-07-15 14:12:07.095059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 
14:12:07.095232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.723 [2024-07-15 14:12:07.095287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.723 [2024-07-15 14:12:07.095296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.724 [2024-07-15 14:12:07.095571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:32 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.724 [2024-07-15 14:12:07.095589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.724 [2024-07-15 14:12:07.095605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.724 [2024-07-15 14:12:07.095621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59000 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59008 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59016 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59024 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59032 len:8 PRP1 0x0 
PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59624 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59040 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59048 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59056 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59064 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59072 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59080 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.095975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.095981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59088 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.095987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.095995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.724 [2024-07-15 14:12:07.096001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.724 [2024-07-15 14:12:07.096007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59096 len:8 PRP1 0x0 PRP2 0x0 00:26:15.724 [2024-07-15 14:12:07.096014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.724 [2024-07-15 14:12:07.096049] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e75b0 was disconnected and freed. reset controller. 
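For readers decoding the storm above: each pair of records is nvme_qpair.c printing a still-queued command and then completing it with status (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion" -- the expected completion when bdev_nvme tears down an I/O submission queue that still has work outstanding. The trailing p/m/dnr fields are the completion's phase tag, "more" bit, and do-not-retry bit. When reading such a log offline, a quick tally of the aborts (an illustrative command, not part of the test) is:

    grep -c 'ABORTED - SQ DELETION' try.txt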
00:26:15.724 [2024-07-15 14:12:07.096059] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:15.724 [2024-07-15 14:12:07.096080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.725 [2024-07-15 14:12:07.096087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.725 [2024-07-15 14:12:07.106191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.725 [2024-07-15 14:12:07.106221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.725 [2024-07-15 14:12:07.106231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.725 [2024-07-15 14:12:07.106238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.725 [2024-07-15 14:12:07.106251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.725 [2024-07-15 14:12:07.106259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.725 [2024-07-15 14:12:07.106267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:15.725 [2024-07-15 14:12:07.106312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17aaea0 (9): Bad file descriptor 00:26:15.725 [2024-07-15 14:12:07.109879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:15.725 [2024-07-15 14:12:07.314110] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
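The failover itself is script-driven rather than caused by a link fault: as the trace lines at host/failover.sh@84, @98 and @100 further down show, each hop is provoked by detaching the path the controller is currently using, after which bdev_nvme promotes the next registered transport ID and resets the controller. A minimal sketch of one hop, using the exact rpc.py invocation the log records (only the $rpc shorthand is added):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the active path; queued I/O completes with SQ-deletion aborts and
    # bdev_nvme fails over to the next listener registered for NVMe0.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3   # give the reset/failover time to settle, as the script does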
00:26:15.725 
00:26:15.725 Latency(us)
00:26:15.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:15.725 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:15.725 Verification LBA range: start 0x0 length 0x4000
00:26:15.725 NVMe0n1 : 15.01 11256.58 43.97 777.55 0.00 10608.72 501.76 18022.40
00:26:15.725 ===================================================================================================================
00:26:15.725 Total : 11256.58 43.97 777.55 0.00 10608.72 501.76 18022.40
00:26:15.725 Received shutdown signal, test time was about 15.000000 seconds
00:26:15.725 
00:26:15.725 Latency(us)
00:26:15.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:15.725 ===================================================================================================================
00:26:15.725 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1496499
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1496499 /var/tmp/bdevperf.sock
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1496499 ']'
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
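Two things happen in the block above. First, the 15-second run is graded: host/failover.sh@65-67 counts 'Resetting controller successful' lines and requires exactly three, one per failover hop. Second, bdevperf is relaunched idle (-z) for the shorter RPC-driven pass. A sketch of both steps follows; the grep's input file and the output redirection are not visible in the trace and are assumed here (try.txt is the file the script later cats and deletes), while waitforlisten is the autotest_common.sh helper seen in the trace:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$spdk/test/nvmf/host/try.txt

    count=$(grep -c 'Resetting controller successful' "$out")
    (( count == 3 )) || exit 1   # @67 fails the test on any other count

    # Start bdevperf suspended (-z); perform_tests will trigger the workload.
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &> "$out" &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

The perform_tests trigger and the wait on run_test_pid appear a few records below (host/failover.sh@89-@92).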
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:15.725 14:12:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:16.297 14:12:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:16.297 14:12:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:26:16.297 14:12:14 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-15 14:12:14.319979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:16.297 14:12:14 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:16.558 [2024-07-15 14:12:14.480347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:16.558 14:12:14 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:16.819 NVMe0n1
00:26:16.819 14:12:14 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:17.111 
00:26:17.111 14:12:15 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:17.372 
00:26:17.372 14:12:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:17.372 14:12:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:17.372 14:12:15 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:17.633 14:12:15 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:20.938 14:12:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:12:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:20.938 14:12:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1497561
14:12:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
14:12:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1497561
00:26:21.882 0
00:26:21.882 14:12:19 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:21.882 [2024-07-15 14:12:13.414482] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:26:21.882 [2024-07-15 14:12:13.414540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496499 ] 00:26:21.882 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.882 [2024-07-15 14:12:13.480556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.882 [2024-07-15 14:12:13.543735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.882 [2024-07-15 14:12:15.593964] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:21.882 [2024-07-15 14:12:15.594013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.882 [2024-07-15 14:12:15.594025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.882 [2024-07-15 14:12:15.594036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.882 [2024-07-15 14:12:15.594043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.882 [2024-07-15 14:12:15.594051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.882 [2024-07-15 14:12:15.594059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.882 [2024-07-15 14:12:15.594067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.882 [2024-07-15 14:12:15.594074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.882 [2024-07-15 14:12:15.594081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.882 [2024-07-15 14:12:15.594109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.882 [2024-07-15 14:12:15.594124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf9ea0 (9): Bad file descriptor 00:26:21.882 [2024-07-15 14:12:15.607590] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:21.882 Running I/O for 1 seconds... 
00:26:21.882 
00:26:21.882 Latency(us)
00:26:21.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:21.882 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:21.882 Verification LBA range: start 0x0 length 0x4000
00:26:21.882 NVMe0n1 : 1.01 11208.09 43.78 0.00 0.00 11367.39 2662.40 10048.85
00:26:21.882 ===================================================================================================================
00:26:21.882 Total : 11208.09 43.78 0.00 0.00 11367.39 2662.40 10048.85
00:26:21.882 14:12:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:22.143 14:12:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
14:12:20 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:22.410 14:12:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:22.410 14:12:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
14:12:20 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:22.674 14:12:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1496499
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1496499 ']'
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1496499
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1496499
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1496499'
killing process with pid 1496499
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1496499
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1496499
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:25.978 14:12:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
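The @948-@972 records above are autotest_common.sh's killprocess helper tearing bdevperf down. Its approximate shape, reconstructed from the xtrace lines alone (the real helper has more branches; line-for-line fidelity is not claimed):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # @948: refuse an empty pid
        kill -0 "$pid" || return 0             # @952: nothing to do if already gone
        if [ "$(uname)" = Linux ]; then        # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954
        fi
        # @958: a process named "sudo" would need different handling (not shown)
        echo "killing process with pid $pid"   # @966
        kill "$pid"                            # @967
        wait "$pid"                            # @972: reap it
    }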
14:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.239 rmmod nvme_tcp 00:26:26.239 rmmod nvme_fabrics 00:26:26.239 rmmod nvme_keyring 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1492277 ']' 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1492277 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1492277 ']' 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1492277 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1492277 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1492277' 00:26:26.239 killing process with pid 1492277 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1492277 00:26:26.239 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1492277 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.501 14:12:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.417 14:12:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.417 00:26:28.417 real 0m40.146s 00:26:28.417 user 2m1.535s 00:26:28.417 sys 0m8.700s 00:26:28.417 14:12:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:28.417 14:12:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
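nvmftestfini above is the generic teardown: sync, unload the kernel NVMe-oF initiator modules, kill the long-lived target process, and dismantle the test network. Reduced to its visible steps as a sketch; the body of _remove_spdk_ns is not shown in this log, so the ip netns delete line is an assumption about what it amounts to:

    modprobe -v -r nvme-tcp        # the trace shows this rmmod'ing nvme_tcp,
                                   # nvme_fabrics and nvme_keyring together
    modprobe -v -r nvme-fabrics    # second pass per nvmf/common.sh@123
    killprocess "$nvmfpid"         # 1492277 here: the target started at setup
    ip netns delete cvl_0_0_ns_spdk   # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1          # nvmf/common.sh@279, visible above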
00:26:28.417 ************************************ 00:26:28.417 END TEST nvmf_failover 00:26:28.417 ************************************ 00:26:28.417 14:12:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:28.417 14:12:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:28.417 14:12:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:28.417 14:12:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.417 14:12:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:28.679 ************************************ 00:26:28.679 START TEST nvmf_host_discovery 00:26:28.679 ************************************ 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:28.679 * Looking for test storage... 00:26:28.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:... [elided: prepends /opt/golangci/1.54.2/bin to a PATH that already contains the golangci/protoc/go triple several times over]
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:... [elided: same pattern, prepending /opt/go/1.21.1/bin]
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:... [elided: same pattern, prepending /opt/protoc/21.7/bin]
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:... [elided: the full duplicated PATH value]
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
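Just below, host/discovery.sh pins DISCOVERY_PORT=8009 and DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery: 8009 is the conventional NVMe-oF discovery port and that NQN is the spec's well-known discovery subsystem name. As an illustrative aside (this command is not run by the test), the same discovery service could be queried manually from a Linux initiator with nvme-cli:

    # Ask the target's discovery service which subsystems it exposes.
    nvme discover -t tcp -a 10.0.0.2 -s 8009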
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:28.679 14:12:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.831 14:12:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:36.831 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:36.831 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:36.831 14:12:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:36.831 Found net devices under 0000:31:00.0: cvl_0_0 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:36.831 Found net devices under 0000:31:00.1: cvl_0_1 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.831 14:12:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:36.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:26:36.831 00:26:36.831 --- 10.0.0.2 ping statistics --- 00:26:36.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.831 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:26:36.831 00:26:36.831 --- 10.0.0.1 ping statistics --- 00:26:36.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.831 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1503213 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
1503213 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1503213 ']' 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.831 14:12:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.831 [2024-07-15 14:12:34.482081] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:36.831 [2024-07-15 14:12:34.482146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.831 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.831 [2024-07-15 14:12:34.574446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.831 [2024-07-15 14:12:34.629057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.831 [2024-07-15 14:12:34.629089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.831 [2024-07-15 14:12:34.629094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.831 [2024-07-15 14:12:34.629099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.831 [2024-07-15 14:12:34.629103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
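At this point nvmftestinit has finished the network plumbing visible in the trace above: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 for the target, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1/24 for the initiator, reachability was verified with ping in both directions, and nvmf_tgt was launched inside the namespace. Condensed from the trace into a standalone sketch; the interface names, addresses, and binary path are the ones from this particular run, not general defaults:

    # Split the two ports across namespaces so target/initiator traffic
    # crosses the physical link between the two E810 ports.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator side and sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # Launch the target inside the namespace (core mask 0x2, tracepoint mask 0xFFFF).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &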
00:26:36.831 [2024-07-15 14:12:34.629123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.402 [2024-07-15 14:12:35.289716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.402 [2024-07-15 14:12:35.297861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.402 null0 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.402 null1 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1503277 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1503277 /tmp/host.sock 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1503277 ']' 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:37.402 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.402 14:12:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.402 [2024-07-15 14:12:35.388521] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:37.402 [2024-07-15 14:12:35.388568] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503277 ] 00:26:37.402 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.402 [2024-07-15 14:12:35.453961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.664 [2024-07-15 14:12:35.518710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:38.266 14:12:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.266 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.528 [2024-07-15 14:12:36.496874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.528 
14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:38.528 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:38.529 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:38.790 14:12:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:39.362 [2024-07-15 14:12:37.198950] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:39.362 [2024-07-15 14:12:37.198972] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:39.363 [2024-07-15 14:12:37.198985] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:39.363 [2024-07-15 14:12:37.287252] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:39.628 [2024-07-15 14:12:37.514231] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:39.628 [2024-07-15 14:12:37.514252] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.628 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.889 14:12:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.150 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:40.150 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:40.150 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:40.150 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.150 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:40.150 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.150 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 [2024-07-15 14:12:38.024875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:40.151 [2024-07-15 14:12:38.025072] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:40.151 [2024-07-15 14:12:38.025096] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:40.151 [2024-07-15 14:12:38.113781] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:40.151 14:12:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:40.411 [2024-07-15 14:12:38.415118] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:40.411 [2024-07-15 14:12:38.415135] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:40.411 [2024-07-15 14:12:38.415141] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.357 [2024-07-15 14:12:39.256702] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:41.357 [2024-07-15 14:12:39.256723] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:41.357 [2024-07-15 14:12:39.265795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.357 [2024-07-15 14:12:39.265815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.357 [2024-07-15 14:12:39.265824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.357 [2024-07-15 14:12:39.265832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.357 [2024-07-15 14:12:39.265840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.357 [2024-07-15 14:12:39.265851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.357 [2024-07-15 14:12:39.265860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.357 [2024-07-15 14:12:39.265867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.357 [2024-07-15 14:12:39.265875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:41.357 [2024-07-15 14:12:39.275809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.357 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.357 [2024-07-15 14:12:39.285847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.357 [2024-07-15 14:12:39.286217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.357 [2024-07-15 14:12:39.286232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.357 [2024-07-15 14:12:39.286240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.357 [2024-07-15 14:12:39.286252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.357 [2024-07-15 14:12:39.286263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.357 [2024-07-15 14:12:39.286270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.357 [2024-07-15 14:12:39.286278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.357 [2024-07-15 14:12:39.286290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
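The *ERROR* blocks from here on are the expected outcome of the step at host/discovery.sh@127 rather than a test failure: the 4420 listener was just removed, so the controller path that was connected there drops, and every reconnect attempt to 10.0.0.2:4420 now fails with errno 111 (ECONNREFUSED). bdev_nvme keeps resetting and retrying that path while the 4421 path stays healthy, and the test polls until only 4421 is reported. Outside the harness the same step and check look roughly like this (a sketch; scripts/rpc.py is assumed as the client that the harness's rpc_cmd wrapper drives):

    # Target side: drop the 4420 listener from the subsystem.
    scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host side: poll until only the 4421 path is reported for nvme0.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'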
00:26:41.358 [2024-07-15 14:12:39.295902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 [2024-07-15 14:12:39.296249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.296261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.296268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.296279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.296290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.296296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.296303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.296314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.358 [2024-07-15 14:12:39.305954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 [2024-07-15 14:12:39.306322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.306334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.306341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.306352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.306363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.306369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.306376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.306386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
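All of the repeated `local max=10` / `(( max-- ))` / `eval ...` / `sleep 1` lines punctuating this trace come from a single helper, waitforcondition in autotest_common.sh (the @912-@918 script locations). Reconstructed from those trace points it is a ten-attempt, one-second poll loop; a sketch, with the caveat that the real helper's argument handling may differ slightly:

    # Poll a bash condition until it holds, for at most ~10 seconds.
    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }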
00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.358 [2024-07-15 14:12:39.316013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:41.358 [2024-07-15 14:12:39.316341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.316353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.316360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.316371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.316381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.316387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.316394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.316405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:41.358 [2024-07-15 14:12:39.326214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 [2024-07-15 14:12:39.326556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.326570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.326581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.326592] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.326603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.326610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.326617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.326627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.358 [2024-07-15 14:12:39.336268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 [2024-07-15 14:12:39.336645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.336657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.336664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.336675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.336685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.336691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.336698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.336708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.358 [2024-07-15 14:12:39.346320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 [2024-07-15 14:12:39.346654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.346666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.346674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.346685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.346695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.346701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.346708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.346719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.358 [2024-07-15 14:12:39.356373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 [2024-07-15 14:12:39.356679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.356693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.356700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.356711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.356722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.356729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.356740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.356756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:41.358 [2024-07-15 14:12:39.366425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:41.358 [2024-07-15 14:12:39.366791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.366804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.366812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.358 [2024-07-15 14:12:39.366823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.358 [2024-07-15 14:12:39.366833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.358 [2024-07-15 14:12:39.366839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.358 [2024-07-15 14:12:39.366846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.358 [2024-07-15 14:12:39.366856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:41.358 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.358 [2024-07-15 14:12:39.376476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.358 [2024-07-15 14:12:39.376960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.358 [2024-07-15 14:12:39.376998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc219a0 with addr=10.0.0.2, port=4420 00:26:41.358 [2024-07-15 14:12:39.377009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc219a0 is same with the state(5) to be set 00:26:41.359 [2024-07-15 14:12:39.377028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc219a0 (9): Bad file descriptor 00:26:41.359 [2024-07-15 14:12:39.377040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.359 [2024-07-15 14:12:39.377047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.359 [2024-07-15 14:12:39.377060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.359 [2024-07-15 14:12:39.377075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
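[The waitforcondition calls interleaved with those errors are the test's polling helper from autotest_common.sh. Pieced together from the xtrace markers in this log (local cond, local max=10, (( max-- )), eval, sleep 1, return 0), a minimal reconstruction looks like this; the upstream body may differ in detail:

    # Reconstructed from the xtrace above (autotest_common.sh@912-918).
    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0   # condition met
            fi
            sleep 1        # retry once per second, up to 10 attempts
        done
        return 1
    }

Each get_* helper it evaluates (get_bdev_list, get_subsystem_paths, get_notification_count) is itself an rpc_cmd call piped through jq, sort and xargs, as the host/discovery.sh@55/@59/@63 trace lines show.]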
00:26:41.359 [2024-07-15 14:12:39.386418] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:41.359 [2024-07-15 14:12:39.386437] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:41.359 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.359 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:41.359 14:12:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:42.745 14:12:40 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.745 14:12:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.689 [2024-07-15 14:12:41.750932] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:43.689 [2024-07-15 14:12:41.750953] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:43.689 [2024-07-15 14:12:41.750966] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.951 [2024-07-15 14:12:41.839232] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:43.951 [2024-07-15 14:12:41.904173] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:43.951 [2024-07-15 14:12:41.904205] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.951 14:12:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.951 request: 00:26:43.951 { 00:26:43.951 "name": "nvme", 00:26:43.951 "trtype": "tcp", 00:26:43.951 "traddr": "10.0.0.2", 00:26:43.951 "adrfam": "ipv4", 00:26:43.951 "trsvcid": "8009", 00:26:43.951 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:43.951 "wait_for_attach": true, 00:26:43.951 "method": "bdev_nvme_start_discovery", 00:26:43.951 "req_id": 1 00:26:43.951 } 00:26:43.951 Got JSON-RPC error response 00:26:43.951 response: 00:26:43.951 { 00:26:43.951 "code": -17, 00:26:43.951 "message": "File exists" 00:26:43.951 } 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.951 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.952 14:12:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.952 request: 00:26:43.952 { 00:26:43.952 "name": "nvme_second", 00:26:43.952 "trtype": "tcp", 00:26:43.952 "traddr": "10.0.0.2", 00:26:43.952 "adrfam": "ipv4", 00:26:43.952 "trsvcid": "8009", 00:26:43.952 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:43.952 "wait_for_attach": true, 00:26:43.952 "method": "bdev_nvme_start_discovery", 00:26:43.952 "req_id": 1 00:26:43.952 } 00:26:43.952 Got JSON-RPC error response 00:26:43.952 response: 00:26:43.952 { 00:26:43.952 "code": -17, 00:26:43.952 "message": "File exists" 00:26:43.952 } 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:43.952 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.214 14:12:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.158 [2024-07-15 14:12:43.167831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.158 [2024-07-15 14:12:43.167862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5fd80 with addr=10.0.0.2, port=8010 00:26:45.158 [2024-07-15 14:12:43.167876] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:45.158 [2024-07-15 14:12:43.167884] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:45.158 [2024-07-15 14:12:43.167891] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:46.133 [2024-07-15 14:12:44.170173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.133 [2024-07-15 14:12:44.170196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5fd80 with addr=10.0.0.2, port=8010 00:26:46.133 [2024-07-15 14:12:44.170207] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:46.133 [2024-07-15 14:12:44.170214] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:46.133 [2024-07-15 14:12:44.170220] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:47.148 [2024-07-15 14:12:45.172175] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:47.148 request: 00:26:47.148 { 00:26:47.148 "name": "nvme_second", 00:26:47.148 "trtype": "tcp", 00:26:47.148 "traddr": "10.0.0.2", 00:26:47.148 "adrfam": "ipv4", 00:26:47.148 "trsvcid": "8010", 00:26:47.148 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:47.148 "wait_for_attach": false, 00:26:47.148 "attach_timeout_ms": 3000, 00:26:47.148 "method": "bdev_nvme_start_discovery", 00:26:47.148 "req_id": 1 
00:26:47.148 } 00:26:47.148 Got JSON-RPC error response 00:26:47.148 response: 00:26:47.148 { 00:26:47.148 "code": -110, 00:26:47.148 "message": "Connection timed out" 00:26:47.148 } 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1503277 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.148 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.148 rmmod nvme_tcp 00:26:47.409 rmmod nvme_fabrics 00:26:47.409 rmmod nvme_keyring 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1503213 ']' 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1503213 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1503213 ']' 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1503213 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1503213 
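[The two request/response pairs above are deliberate negative tests of bdev_nvme_start_discovery: starting a second discovery service against 10.0.0.2:8009 while one is already running fails with JSON-RPC error -17 ("File exists"), and pointing one at port 8010, where nothing listens, with a 3000 ms attach timeout fails with -110 ("Connection timed out"). A manual re-run with the rpc.py client would look like the sketch below; the flags are copied verbatim from the trace, and /tmp/host.sock is the host-side app's RPC socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Duplicate discovery service -> error -17 ("File exists"):
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # No discovery listener on 8010, 3 s attach timeout -> error -110:
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
]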
00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1503213' 00:26:47.409 killing process with pid 1503213 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1503213 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1503213 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.409 14:12:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.957 00:26:49.957 real 0m21.001s 00:26:49.957 user 0m24.813s 00:26:49.957 sys 0m7.140s 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.957 ************************************ 00:26:49.957 END TEST nvmf_host_discovery 00:26:49.957 ************************************ 00:26:49.957 14:12:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:49.957 14:12:47 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:49.957 14:12:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:49.957 14:12:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.957 14:12:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.957 ************************************ 00:26:49.957 START TEST nvmf_host_multipath_status 00:26:49.957 ************************************ 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:49.957 * Looking for test storage... 
00:26:49.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.957 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:49.958 14:12:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.958 14:12:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:58.099 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:58.099 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
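[The prologue above is nvmf/common.sh enumerating physical NICs for the multipath_status test: it keeps only Intel E810 devices (vendor 0x8086, device 0x159b, bound to the ice driver) and finds the two ports at 0000:31:00.0 and 0000:31:00.1. A hypothetical equivalent of that scan from a shell on the same bed:

    # List the E810 ports the trace finds (vendor 0x8086, device 0x159b):
    lspci -d 8086:159b
    # Map a port to its net device, as the pci_net_devs glob above does
    # (the trace resolves these to cvl_0_0 and cvl_0_1):
    ls /sys/bus/pci/devices/0000:31:00.0/net
]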
00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:58.099 Found net devices under 0000:31:00.0: cvl_0_0 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:58.099 Found net devices under 0000:31:00.1: cvl_0_1 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:58.099 14:12:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.849 ms 00:26:58.099 00:26:58.099 --- 10.0.0.2 ping statistics --- 00:26:58.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.099 rtt min/avg/max/mdev = 0.849/0.849/0.849/0.000 ms 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:26:58.099 00:26:58.099 --- 10.0.0.1 ping statistics --- 00:26:58.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.099 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.099 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1510072 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1510072 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1510072 ']' 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.100 14:12:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.100 [2024-07-15 14:12:55.741760] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
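The nvmf_tcp_init sequence above splits the two E810 ports across network namespaces so a single host can drive real NIC-to-NIC TCP: cvl_0_0 moves into a fresh namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the target binary is then launched under ip netns exec, which is what the NVMF_TARGET_NS_CMD prefix ahead of nvmf_tgt does. Condensed from the commands in this run (error handling and the initial addr flushes omitted):

    # Namespace split performed by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk                # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # initiator -> target sanity check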
00:26:58.100 [2024-07-15 14:12:55.741823] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.100 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.100 [2024-07-15 14:12:55.822660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.100 [2024-07-15 14:12:55.896877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.100 [2024-07-15 14:12:55.896918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.100 [2024-07-15 14:12:55.896926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.100 [2024-07-15 14:12:55.896933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.100 [2024-07-15 14:12:55.896938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.100 [2024-07-15 14:12:55.897077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.100 [2024-07-15 14:12:55.897079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1510072 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:58.726 [2024-07-15 14:12:56.692718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.726 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:58.985 Malloc0 00:26:58.985 14:12:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:58.985 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.244 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.244 [2024-07-15 14:12:57.337700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.244 14:12:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:59.504 [2024-07-15 14:12:57.490054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1510450 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1510450 /var/tmp/bdevperf.sock 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1510450 ']' 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:59.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.504 14:12:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:00.444 14:12:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.444 14:12:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:00.444 14:12:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:00.444 14:12:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:00.704 Nvme0n1 00:27:00.705 14:12:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:00.965 Nvme0n1 00:27:01.225 14:12:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:01.225 14:12:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:03.138 14:13:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:03.138 14:13:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:03.397 14:13:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:03.397 14:13:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.782 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.043 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.043 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.043 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.043 14:13:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.043 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.043 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:05.043 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.043 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.305 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.305 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:05.305 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.305 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:05.565 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.565 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:05.565 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:05.565 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:05.825 14:13:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:06.788 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:06.788 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:06.788 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.788 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.049 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.049 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:07.049 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.049 14:13:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:07.049 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.049 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.049 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.049 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.310 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.310 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:07.310 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.310 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.573 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:07.834 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.834 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:07.834 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:08.095 14:13:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:08.095 14:13:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.481 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.742 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:10.003 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.003 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:10.003 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.003 14:13:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:10.263 14:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.263 14:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:10.263 14:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:10.263 14:13:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:10.523 14:13:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:11.464 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:11.464 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.464 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.464 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.725 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:11.986 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.986 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:11.986 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.986 14:13:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.246 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.507 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.507 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:12.507 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:12.768 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:12.768 14:13:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:14.155 14:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:14.156 14:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:14.156 14:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.156 14:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.156 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.417 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:14.678 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.678 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:14.678 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.678 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:14.940 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.940 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:14.940 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:14.940 14:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:15.211 14:13:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:16.158 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:16.158 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:16.158 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.158 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.419 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:16.680 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.680 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:16.680 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.680 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.941 14:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:17.202 14:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.202 14:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:17.202 14:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:17.202 14:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:27:17.464 14:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:17.725 14:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:18.668 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:18.668 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:18.668 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.668 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.929 14:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.189 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.189 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:19.189 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.189 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:19.450 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.450 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:19.450 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.450 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:19.450 14:13:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.450 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:19.450 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.450 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:19.710 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.710 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:19.710 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.969 14:13:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:19.969 14:13:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:20.906 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:20.906 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:21.182 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.182 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:21.182 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.183 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:21.183 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:21.183 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.441 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.441 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:21.441 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.441 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:21.441 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.441 14:13:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:21.441 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.441 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.701 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.701 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:21.701 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.701 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.962 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.962 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:21.962 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.962 14:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.962 14:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.962 14:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:21.962 14:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:22.222 14:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:22.482 14:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:23.423 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:23.423 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:23.423 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.423 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:23.684 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.684 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:23.684 14:13:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.684 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.684 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.684 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.684 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.684 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.944 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.944 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.944 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.944 14:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.945 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.945 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:23.945 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.945 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:24.205 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.205 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:24.205 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.205 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:24.466 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.466 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:24.466 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:24.466 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:24.726 14:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:25.681 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:25.681 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:25.681 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.681 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.943 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.943 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:25.943 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.943 14:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.205 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.466 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.466 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:26.466 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.466 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1510450 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1510450 ']' 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1510450 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510450 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510450' 00:27:26.728 killing process with pid 1510450 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1510450 00:27:26.728 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1510450 00:27:27.010 Connection closed with partial response: 00:27:27.011 00:27:27.011 00:27:27.011 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1510450 00:27:27.011 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:27.011 [2024-07-15 14:12:57.550387] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:27.011 [2024-07-15 14:12:57.550443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510450 ] 00:27:27.011 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.011 [2024-07-15 14:12:57.606394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.011 [2024-07-15 14:12:57.658233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.011 Running I/O for 90 seconds... 
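(For readability: the port_status / check_status cycle exercised over and over in the trace above reduces to the pattern below. This is a minimal sketch reconstructed from the traced commands only -- the real definitions live in test/nvmf/host/multipath_status.sh, and the $rpc_py shorthand for "scripts/rpc.py -s /var/tmp/bdevperf.sock" is an assumption for brevity, not the verbatim script.)

    # Sketch reconstructed from the trace; paths and variable names are assumptions.
    rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    port_status() {
        local port=$1 attr=$2 expected=$3
        # Ask bdevperf for its view of every I/O path, then pull out the
        # .current/.connected/.accessible flag for the listener on $port.
        local actual
        actual=$($rpc_py bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

    check_status() {
        # Six expected values, in the order the trace shows (sh@68..@73):
        # current, then connected, then accessible, for ports 4420 and 4421.
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }
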
00:27:27.011 [2024-07-15 14:13:10.640145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.011 [2024-07-15 14:13:10.640178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:27.011 [... from 14:13:10.640 onward the same NOTICE pair repeats, in multiple passes over the same LBA ranges, for every outstanding qid:1 command: WRITEs covering lba 49656-50360 and READs covering lba 49344-49648, all len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
[2024-07-15 14:13:10.645922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.645926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.645937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.645941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.645952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.645958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.645968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.645973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.645983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.645988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.015 [2024-07-15 14:13:10.646594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.646604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.015 [2024-07-15 14:13:10.657342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.015 [2024-07-15 14:13:10.657384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657425] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 
14:13:10.657720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50296 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.657991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.016 [2024-07-15 14:13:10.657996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.658006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.016 [2024-07-15 14:13:10.658012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.658023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.016 [2024-07-15 14:13:10.658028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:27.016 [2024-07-15 14:13:10.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
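Every completion above carries the same status, which SPDK prints as "STATUS STRING (SCT/SC)": status code type 0x3 (Path Related Status), status code 0x2 (Asymmetric Access Inaccessible). In other words, the ANA state of the path serving qid:1 has gone inaccessible, so each queued READ and WRITE is failed back to the initiator; with dnr:0 the commands remain eligible for retry on another path. A minimal sketch of how the "(03/02) ... p:0 m:0 dnr:0" fields decode, written as plain C against the NVMe completion-status layout (illustrative code, not SPDK's; the sample status value is constructed for the example):

#include <stdint.h>
#include <stdio.h>

/* Upper 16 bits of NVMe CQE Dword 3:
 * [15]=DNR  [14]=M  [13:12]=CRD  [11:9]=SCT  [8:1]=SC  [0]=P */
int main(void)
{
    uint16_t status = (0x3u << 9) | (0x2u << 1); /* hypothetical sample: SCT=0x3, SC=0x2 */

    unsigned p   = status & 0x1;          /* phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* status code: 0x02 = ANA Inaccessible */
    unsigned sct = (status >> 9) & 0x7;   /* status code type: 0x3 = Path Related */
    unsigned m   = (status >> 14) & 0x1;  /* more info available in error log */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0; /* prints "(03/02) p:0 m:0 dnr:0", matching the notices above */
}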
[... roughly 160 further notice pairs omitted: the remaining READs (lba 49424-49648) fail the same way, after which the same lba range (49344-50296) is driven again on qid:1 and fails identically with ASYMMETRIC ACCESS INACCESSIBLE (03/02) through 14:13:10.662; sqhd wraps to 0000 a second time at 14:13:10.660367 ...]
00:27:27.019 [2024-07-15 14:13:10.662209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:27.019 [2024-07-15 14:13:10.662217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:27.019 [2024-07-15 14:13:10.662228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50296
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.019 [2024-07-15 14:13:10.662343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.019 [2024-07-15 14:13:10.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:27.019 [2024-07-15 14:13:10.662371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.019 [2024-07-15 14:13:10.662379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 
m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.662602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.662694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.662704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.020 [2024-07-15 14:13:10.669408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:27.020 [2024-07-15 14:13:10.669958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.020 [2024-07-15 14:13:10.669963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.669973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.669979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.669989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.669994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.021 [2024-07-15 14:13:10.670195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.021 [2024-07-15 14:13:10.670211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.021 [2024-07-15 14:13:10.670227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.021 [2024-07-15 14:13:10.670243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.021 [2024-07-15 14:13:10.670259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.021 [2024-07-15 14:13:10.670278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.021 [2024-07-15 14:13:10.670294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:27:27.021 [2024-07-15 14:13:10.670415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:27.021 [2024-07-15 14:13:10.670620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.021 [2024-07-15 14:13:10.670626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.022 [2024-07-15 14:13:10.670840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:27.022 [2024-07-15 14:13:10.670890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.670983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.670989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.022 [2024-07-15 14:13:10.671816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:27.022 [2024-07-15 14:13:10.671826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:27.022 [2024-07-15 14:13:10.671832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats here for every outstanding READ and WRITE on sqid:1 (nsid:1, lba 49344-50360, len:8): a first burst over 14:13:10.671843-10.674237 (sqhd 0018-007f, wrapping to 0000) and a second burst over 14:13:10.679087-10.680726 (sqhd 0001 onward); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0 ...]
00:27:27.028 [2024-07-15 14:13:10.680721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:27.028 [2024-07-15 14:13:10.680726] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:27.028 [2024-07-15 14:13:10.680891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.680987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.680997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.681975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.681980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.028 [2024-07-15 14:13:10.682071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.028 [2024-07-15 14:13:10.682220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:27.028 [2024-07-15 14:13:10.682230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:27:27.029 [2024-07-15 14:13:10.682277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.682564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.682569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.029 [2024-07-15 14:13:10.683913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.029 [2024-07-15 14:13:10.683977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.683988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.683994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.684004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.684009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.684020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.684026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.684036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.684041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:27.029 [2024-07-15 14:13:10.684052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.029 [2024-07-15 14:13:10.684057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.684236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:27:27.030 [2024-07-15 14:13:10.684927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.684990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.684995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.030 [2024-07-15 14:13:10.685309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.685326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.685343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.685360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.685376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.685392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:27.030 [2024-07-15 14:13:10.685402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.030 [2024-07-15 14:13:10.685407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.031 [2024-07-15 14:13:10.685423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.031 [2024-07-15 14:13:10.685438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.031 [2024-07-15 14:13:10.685455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.031 [2024-07-15 14:13:10.685471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.031 [2024-07-15 14:13:10.685488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.031 [2024-07-15 14:13:10.685504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.031 [2024-07-15 14:13:10.685520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:27.031 [2024-07-15 14:13:10.685530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:27.031 [2024-07-15 14:13:10.685537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:27.031 [2024-07-15 14:13:10.685702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:27.031 [2024-07-15 14:13:10.685709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:27.031 [2024-07-15 14:13:10.686318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.031 [2024-07-15 14:13:10.686324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pairs omitted: WRITE (lba:49656-50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba:49344-49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd cycling through 0x0000-0x007f, between 14:13:10.685 and 14:13:10.693 ...]
00:27:27.036 [2024-07-15 14:13:10.693102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.036 [2024-07-15 14:13:10.693107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0
dnr:0 00:27:27.036 [2024-07-15 14:13:10.693118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.036 [2024-07-15 14:13:10.693284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.036 [2024-07-15 14:13:10.693783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:27.036 [2024-07-15 14:13:10.693793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 
[2024-07-15 14:13:10.693912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.693972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.693978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.037 [2024-07-15 14:13:10.694156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.037 [2024-07-15 14:13:10.694172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.037 [2024-07-15 14:13:10.694190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49368 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.037 [2024-07-15 14:13:10.694205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.037 [2024-07-15 14:13:10.694221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.037 [2024-07-15 14:13:10.694237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.037 [2024-07-15 14:13:10.694253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:27:27.037 [2024-07-15 14:13:10.694684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:27.037 [2024-07-15 14:13:10.694754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.037 [2024-07-15 14:13:10.694761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.694955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.694961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.038 [2024-07-15 14:13:10.695229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:27.038 [2024-07-15 14:13:10.695599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.695913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.695919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.696229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.038 [2024-07-15 14:13:10.696234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:27.038 [2024-07-15 14:13:10.697108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.039 [2024-07-15 14:13:10.697116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.039 [2024-07-15 14:13:10.697133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:27:27.039 [2024-07-15 14:13:10.697322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.039 [2024-07-15 14:13:10.697503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:27.039 [2024-07-15 14:13:10.697579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.039 [2024-07-15 14:13:10.697585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
[... repeated NVMe I/O NOTICE pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command (READ/WRITE sqid:1 nsid:1, lba 49344-50360, len:8), each followed by nvme_qpair.c: 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd cycling through 0x0000-0x007f; timestamps 2024-07-15 14:13:10.697-14:13:10.704, elapsed 00:27:27.039-00:27:27.044 ...]
00:27:27.044 [2024-07-15 14:13:10.704153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1
lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.044 [2024-07-15 14:13:10.704159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.044 [2024-07-15 14:13:10.704310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.044 [2024-07-15 14:13:10.704317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:27.044 [2024-07-15 14:13:10.704334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.044 [2024-07-15 14:13:10.704340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:27.044 [2024-07-15 14:13:10.704356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.044 [2024-07-15 14:13:10.704362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:27.044 [2024-07-15 14:13:10.704379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.044 [2024-07-15 14:13:10.704384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:27.044 [2024-07-15 14:13:10.704401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.044 [2024-07-15 14:13:10.704407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:27.044 [2024-07-15 14:13:10.704423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.704965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
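Every completion in this burst carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related) with status code 0x2 (ANA inaccessible), which is what the multipath test provokes while the active path is withdrawn. A minimal sketch for tallying such records offline, assuming this console output were saved to a file (log.txt is a hypothetical name here):
    # Count submissions per opcode (READ vs WRITE) seen by the I/O qpair...
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' log.txt | sort | uniq -c
    # ...and how many completions came back ANA-inaccessible.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' log.txt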
00:27:27.045 [2024-07-15 14:13:10.704989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.705012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.705018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.705035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.705043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.705060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.705066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.705084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.705090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:10.705126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:10.705133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.723484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:22.723518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.723551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:22.723557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.724958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:22.724975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.724988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:22.724993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.725004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:22.725009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.725019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:22.725024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.725035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:27.045 [2024-07-15 14:13:22.725040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:27.045 [2024-07-15 14:13:22.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.045 [2024-07-15 14:13:22.725056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:27.045 Received shutdown signal, test time was about 25.591160 seconds
00:27:27.045
00:27:27.045                                                        Latency(us)
00:27:27.045 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:27.045 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:27.045 Verification LBA range: start 0x0 length 0x4000
00:27:27.045 Nvme0n1            :      25.59   10907.43      42.61       0.00       0.00   11716.21     291.84 3075822.93
00:27:27.045 ===================================================================================================================
00:27:27.045 Total              :              10907.43      42.61       0.00       0.00   11716.21     291.84 3075822.93
00:27:27.045 14:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.045 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:27.045 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.332 rmmod nvme_tcp 00:27:27.332 rmmod nvme_fabrics 00:27:27.332 rmmod nvme_keyring 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
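The summary table above is internally consistent: at the 4096-byte I/O size, 10907.43 IOPS works out to 10907.43 * 4096 / 1048576, about 42.61 MiB/s, and across the 25.59 s of measured runtime that is roughly 279,000 verified I/Os. A one-line recheck of that arithmetic (a sketch; the constants are just the figures printed in the Nvme0n1 row):
    awk 'BEGIN {
        iops = 10907.43; secs = 25.59   # values copied from the summary table
        printf "MiB/s = %.2f, total I/Os = %.0f\n", iops * 4096 / 1048576, iops * secs
    }'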
00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1510072 ']' 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1510072 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1510072 ']' 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1510072 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510072 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510072' 00:27:27.332 killing process with pid 1510072 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1510072 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1510072 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.332 14:13:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.896 14:13:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.896 00:27:29.896 real 0m39.805s 00:27:29.896 user 1m40.583s 00:27:29.896 sys 0m11.323s 00:27:29.896 14:13:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:29.896 14:13:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:29.896 ************************************ 00:27:29.896 END TEST nvmf_host_multipath_status 00:27:29.896 ************************************ 00:27:29.896 14:13:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:29.896 14:13:27 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:29.896 14:13:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:29.896 14:13:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.896 14:13:27 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:29.896 ************************************ 00:27:29.896 START TEST nvmf_discovery_remove_ifc 00:27:29.896 ************************************ 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:29.896 * Looking for test storage... 00:27:29.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:29.896 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:29.897 14:13:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.041 14:13:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:38.041 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:38.041 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:38.041 Found net devices under 0000:31:00.0: cvl_0_0 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:38.041 Found net devices under 0000:31:00.1: cvl_0_1 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.041 
14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:27:38.041 00:27:38.041 --- 10.0.0.2 ping statistics --- 00:27:38.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.041 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:27:38.041 00:27:38.041 --- 10.0.0.1 ping statistics --- 00:27:38.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.041 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:38.041 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1520479 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1520479 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1520479 ']' 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.042 14:13:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.042 [2024-07-15 14:13:35.833823] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:27:38.042 [2024-07-15 14:13:35.833887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.042 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.042 [2024-07-15 14:13:35.930288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.042 [2024-07-15 14:13:36.023023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.042 [2024-07-15 14:13:36.023076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.042 [2024-07-15 14:13:36.023085] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.042 [2024-07-15 14:13:36.023092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.042 [2024-07-15 14:13:36.023098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.042 [2024-07-15 14:13:36.023121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.614 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.614 [2024-07-15 14:13:36.667280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.614 [2024-07-15 14:13:36.675465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:38.614 null0 00:27:38.614 [2024-07-15 14:13:36.707450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1520759 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1520759 /tmp/host.sock 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1520759 ']' 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:38.875 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.875 14:13:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.875 [2024-07-15 14:13:36.789914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:38.875 [2024-07-15 14:13:36.789988] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520759 ] 00:27:38.875 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.875 [2024-07-15 14:13:36.862731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.875 [2024-07-15 14:13:36.937189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.816 14:13:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.757 [2024-07-15 14:13:38.696824] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:40.757 [2024-07-15 14:13:38.696845] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:40.757 [2024-07-15 14:13:38.696858] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:40.757 [2024-07-15 14:13:38.785143] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:41.018 [2024-07-15 14:13:38.889410] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:41.018 [2024-07-15 14:13:38.889456] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:41.018 [2024-07-15 14:13:38.889477] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:41.018 [2024-07-15 14:13:38.889493] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:41.018 [2024-07-15 14:13:38.889512] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.018 [2024-07-15 14:13:38.894676] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15ca500 was disconnected and freed. delete nvme_qpair. 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:41.018 14:13:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.018 14:13:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:41.018 14:13:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:42.402 14:13:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:43.341 14:13:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:44.280 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:44.280 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.280 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.280 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.281 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.281 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.281 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.281 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.281 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:44.281 14:13:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:45.218 14:13:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:45.218 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.218 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:45.218 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.218 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:45.218 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.218 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:45.218 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.478 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:45.478 14:13:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.416 [2024-07-15 14:13:44.330224] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:46.416 [2024-07-15 14:13:44.330269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.416 [2024-07-15 14:13:44.330281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.416 [2024-07-15 14:13:44.330291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.416 [2024-07-15 14:13:44.330299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.416 [2024-07-15 14:13:44.330307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.416 [2024-07-15 14:13:44.330314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.416 [2024-07-15 14:13:44.330322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.416 [2024-07-15 14:13:44.330329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.416 [2024-07-15 14:13:44.330338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.416 [2024-07-15 14:13:44.330346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.416 [2024-07-15 14:13:44.330353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15910a0 is same with the state(5) to be set 00:27:46.416 [2024-07-15 14:13:44.340243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15910a0 (9): Bad file descriptor 00:27:46.416 [2024-07-15 14:13:44.350283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:46.416 14:13:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.416 14:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.416 14:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.416 14:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.416 14:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.416 14:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.416 14:13:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.352 [2024-07-15 14:13:45.368776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:47.352 [2024-07-15 14:13:45.368813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15910a0 with addr=10.0.0.2, port=4420 00:27:47.352 [2024-07-15 14:13:45.368824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15910a0 is same with the state(5) to be set 00:27:47.352 [2024-07-15 14:13:45.368846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15910a0 (9): Bad file descriptor 00:27:47.352 [2024-07-15 14:13:45.369210] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:47.352 [2024-07-15 14:13:45.369228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:47.352 [2024-07-15 14:13:45.369236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:47.352 [2024-07-15 14:13:45.369244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:47.352 [2024-07-15 14:13:45.369259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.352 [2024-07-15 14:13:45.369267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:47.352 14:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.352 14:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:47.352 14:13:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:48.290 [2024-07-15 14:13:46.371640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:48.290 [2024-07-15 14:13:46.371661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:48.290 [2024-07-15 14:13:46.371669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:48.290 [2024-07-15 14:13:46.371676] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:48.290 [2024-07-15 14:13:46.371689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.290 [2024-07-15 14:13:46.371707] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:48.290 [2024-07-15 14:13:46.371729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.290 [2024-07-15 14:13:46.371738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.290 [2024-07-15 14:13:46.371748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.290 [2024-07-15 14:13:46.371759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.290 [2024-07-15 14:13:46.371768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.290 [2024-07-15 14:13:46.371779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.290 [2024-07-15 14:13:46.371787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.290 [2024-07-15 14:13:46.371794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.290 [2024-07-15 14:13:46.371802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.290 [2024-07-15 14:13:46.371809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.290 [2024-07-15 14:13:46.371817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:48.290 [2024-07-15 14:13:46.372250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1590520 (9): Bad file descriptor 00:27:48.290 [2024-07-15 14:13:46.373262] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:48.290 [2024-07-15 14:13:46.373273] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.290 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:48.550 14:13:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.491 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.775 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:49.775 14:13:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:50.347 [2024-07-15 14:13:48.390059] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:50.347 [2024-07-15 14:13:48.390079] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:50.347 [2024-07-15 14:13:48.390092] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:50.608 [2024-07-15 14:13:48.518496] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:50.608 [2024-07-15 14:13:48.621256] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:50.608 [2024-07-15 14:13:48.621296] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:50.608 [2024-07-15 14:13:48.621315] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:50.608 [2024-07-15 14:13:48.621328] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:50.608 [2024-07-15 14:13:48.621336] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:50.608 [2024-07-15 14:13:48.628405] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15d3cb0 was disconnected and freed. delete nvme_qpair. 
00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1520759 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1520759 ']' 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1520759 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:50.608 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1520759 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1520759' 00:27:50.869 killing process with pid 1520759 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1520759 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1520759 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:50.869 rmmod nvme_tcp 00:27:50.869 rmmod nvme_fabrics 00:27:50.869 rmmod nvme_keyring 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1520479 ']' 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1520479 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1520479 ']' 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1520479 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:50.869 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1520479 00:27:51.130 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:51.130 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:51.130 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1520479' 00:27:51.130 killing process with pid 1520479 00:27:51.130 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1520479 00:27:51.130 14:13:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1520479 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.130 14:13:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.081 14:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:53.081 00:27:53.081 real 0m23.656s 00:27:53.081 user 0m27.143s 00:27:53.081 sys 0m7.250s 00:27:53.081 14:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:53.081 14:13:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.081 ************************************ 00:27:53.081 END TEST nvmf_discovery_remove_ifc 00:27:53.081 ************************************ 00:27:53.342 14:13:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:53.343 14:13:51 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:53.343 14:13:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:53.343 14:13:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.343 14:13:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.343 ************************************ 00:27:53.343 START TEST nvmf_identify_kernel_target 00:27:53.343 ************************************ 
00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:53.343 * Looking for test storage... 00:27:53.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:53.343 14:13:51 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.343 14:13:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:01.486 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:01.486 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.486 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:01.487 Found net devices under 0000:31:00.0: cvl_0_0 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:01.487 Found net devices under 0000:31:00.1: cvl_0_1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:01.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:28:01.487 00:28:01.487 --- 10.0.0.2 ping statistics --- 00:28:01.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.487 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:28:01.487 00:28:01.487 --- 10.0.0.1 ping statistics --- 00:28:01.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.487 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:01.487 14:13:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:01.487 14:13:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:05.696 Waiting for block devices as requested 00:28:05.696 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:05.696 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:05.696 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:05.696 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:05.696 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:05.696 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:05.696 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:05.696 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:05.956 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:05.956 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:05.956 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:06.217 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:06.217 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:06.217 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:06.217 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:06.478 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:06.478 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:06.478 No valid GPT data, bailing 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:06.478 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:28:06.478 00:28:06.478 Discovery Log Number of Records 2, Generation counter 2 00:28:06.479 =====Discovery Log Entry 0====== 00:28:06.479 trtype: tcp 00:28:06.479 adrfam: ipv4 00:28:06.479 subtype: current discovery subsystem 00:28:06.479 treq: not specified, sq flow control disable supported 00:28:06.479 portid: 1 00:28:06.479 trsvcid: 4420 00:28:06.479 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:06.479 traddr: 10.0.0.1 00:28:06.479 eflags: none 00:28:06.479 sectype: none 00:28:06.479 =====Discovery Log Entry 1====== 00:28:06.479 trtype: tcp 00:28:06.479 adrfam: ipv4 00:28:06.479 subtype: nvme subsystem 00:28:06.479 treq: not specified, sq flow control disable supported 00:28:06.479 portid: 1 00:28:06.479 trsvcid: 4420 00:28:06.479 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:06.479 traddr: 10.0.0.1 00:28:06.479 eflags: none 00:28:06.479 sectype: none 00:28:06.479 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:06.479 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:06.740 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.740 ===================================================== 00:28:06.740 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:06.740 ===================================================== 00:28:06.740 Controller Capabilities/Features 00:28:06.740 ================================ 00:28:06.740 Vendor ID: 0000 00:28:06.740 Subsystem Vendor ID: 0000 00:28:06.740 Serial Number: b2697822425e0e1d8dce 00:28:06.740 Model Number: Linux 00:28:06.740 Firmware Version: 6.7.0-68 00:28:06.740 Recommended Arb Burst: 0 00:28:06.740 IEEE OUI Identifier: 00 00 00 00:28:06.740 Multi-path I/O 00:28:06.740 May have multiple subsystem ports: No 00:28:06.740 May have multiple 
controllers: No 00:28:06.740 Associated with SR-IOV VF: No 00:28:06.740 Max Data Transfer Size: Unlimited 00:28:06.740 Max Number of Namespaces: 0 00:28:06.740 Max Number of I/O Queues: 1024 00:28:06.740 NVMe Specification Version (VS): 1.3 00:28:06.740 NVMe Specification Version (Identify): 1.3 00:28:06.740 Maximum Queue Entries: 1024 00:28:06.740 Contiguous Queues Required: No 00:28:06.740 Arbitration Mechanisms Supported 00:28:06.740 Weighted Round Robin: Not Supported 00:28:06.740 Vendor Specific: Not Supported 00:28:06.740 Reset Timeout: 7500 ms 00:28:06.740 Doorbell Stride: 4 bytes 00:28:06.740 NVM Subsystem Reset: Not Supported 00:28:06.740 Command Sets Supported 00:28:06.740 NVM Command Set: Supported 00:28:06.740 Boot Partition: Not Supported 00:28:06.740 Memory Page Size Minimum: 4096 bytes 00:28:06.740 Memory Page Size Maximum: 4096 bytes 00:28:06.740 Persistent Memory Region: Not Supported 00:28:06.740 Optional Asynchronous Events Supported 00:28:06.740 Namespace Attribute Notices: Not Supported 00:28:06.740 Firmware Activation Notices: Not Supported 00:28:06.740 ANA Change Notices: Not Supported 00:28:06.740 PLE Aggregate Log Change Notices: Not Supported 00:28:06.740 LBA Status Info Alert Notices: Not Supported 00:28:06.740 EGE Aggregate Log Change Notices: Not Supported 00:28:06.740 Normal NVM Subsystem Shutdown event: Not Supported 00:28:06.740 Zone Descriptor Change Notices: Not Supported 00:28:06.740 Discovery Log Change Notices: Supported 00:28:06.740 Controller Attributes 00:28:06.740 128-bit Host Identifier: Not Supported 00:28:06.740 Non-Operational Permissive Mode: Not Supported 00:28:06.740 NVM Sets: Not Supported 00:28:06.740 Read Recovery Levels: Not Supported 00:28:06.740 Endurance Groups: Not Supported 00:28:06.740 Predictable Latency Mode: Not Supported 00:28:06.740 Traffic Based Keep ALive: Not Supported 00:28:06.740 Namespace Granularity: Not Supported 00:28:06.740 SQ Associations: Not Supported 00:28:06.740 UUID List: Not Supported 00:28:06.740 Multi-Domain Subsystem: Not Supported 00:28:06.740 Fixed Capacity Management: Not Supported 00:28:06.740 Variable Capacity Management: Not Supported 00:28:06.740 Delete Endurance Group: Not Supported 00:28:06.740 Delete NVM Set: Not Supported 00:28:06.740 Extended LBA Formats Supported: Not Supported 00:28:06.740 Flexible Data Placement Supported: Not Supported 00:28:06.740 00:28:06.740 Controller Memory Buffer Support 00:28:06.740 ================================ 00:28:06.740 Supported: No 00:28:06.740 00:28:06.740 Persistent Memory Region Support 00:28:06.740 ================================ 00:28:06.740 Supported: No 00:28:06.740 00:28:06.740 Admin Command Set Attributes 00:28:06.740 ============================ 00:28:06.740 Security Send/Receive: Not Supported 00:28:06.740 Format NVM: Not Supported 00:28:06.741 Firmware Activate/Download: Not Supported 00:28:06.741 Namespace Management: Not Supported 00:28:06.741 Device Self-Test: Not Supported 00:28:06.741 Directives: Not Supported 00:28:06.741 NVMe-MI: Not Supported 00:28:06.741 Virtualization Management: Not Supported 00:28:06.741 Doorbell Buffer Config: Not Supported 00:28:06.741 Get LBA Status Capability: Not Supported 00:28:06.741 Command & Feature Lockdown Capability: Not Supported 00:28:06.741 Abort Command Limit: 1 00:28:06.741 Async Event Request Limit: 1 00:28:06.741 Number of Firmware Slots: N/A 00:28:06.741 Firmware Slot 1 Read-Only: N/A 00:28:06.741 Firmware Activation Without Reset: N/A 00:28:06.741 Multiple Update Detection Support: N/A 
00:28:06.741 Firmware Update Granularity: No Information Provided 00:28:06.741 Per-Namespace SMART Log: No 00:28:06.741 Asymmetric Namespace Access Log Page: Not Supported 00:28:06.741 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:06.741 Command Effects Log Page: Not Supported 00:28:06.741 Get Log Page Extended Data: Supported 00:28:06.741 Telemetry Log Pages: Not Supported 00:28:06.741 Persistent Event Log Pages: Not Supported 00:28:06.741 Supported Log Pages Log Page: May Support 00:28:06.741 Commands Supported & Effects Log Page: Not Supported 00:28:06.741 Feature Identifiers & Effects Log Page:May Support 00:28:06.741 NVMe-MI Commands & Effects Log Page: May Support 00:28:06.741 Data Area 4 for Telemetry Log: Not Supported 00:28:06.741 Error Log Page Entries Supported: 1 00:28:06.741 Keep Alive: Not Supported 00:28:06.741 00:28:06.741 NVM Command Set Attributes 00:28:06.741 ========================== 00:28:06.741 Submission Queue Entry Size 00:28:06.741 Max: 1 00:28:06.741 Min: 1 00:28:06.741 Completion Queue Entry Size 00:28:06.741 Max: 1 00:28:06.741 Min: 1 00:28:06.741 Number of Namespaces: 0 00:28:06.741 Compare Command: Not Supported 00:28:06.741 Write Uncorrectable Command: Not Supported 00:28:06.741 Dataset Management Command: Not Supported 00:28:06.741 Write Zeroes Command: Not Supported 00:28:06.741 Set Features Save Field: Not Supported 00:28:06.741 Reservations: Not Supported 00:28:06.741 Timestamp: Not Supported 00:28:06.741 Copy: Not Supported 00:28:06.741 Volatile Write Cache: Not Present 00:28:06.741 Atomic Write Unit (Normal): 1 00:28:06.741 Atomic Write Unit (PFail): 1 00:28:06.741 Atomic Compare & Write Unit: 1 00:28:06.741 Fused Compare & Write: Not Supported 00:28:06.741 Scatter-Gather List 00:28:06.741 SGL Command Set: Supported 00:28:06.741 SGL Keyed: Not Supported 00:28:06.741 SGL Bit Bucket Descriptor: Not Supported 00:28:06.741 SGL Metadata Pointer: Not Supported 00:28:06.741 Oversized SGL: Not Supported 00:28:06.741 SGL Metadata Address: Not Supported 00:28:06.741 SGL Offset: Supported 00:28:06.741 Transport SGL Data Block: Not Supported 00:28:06.741 Replay Protected Memory Block: Not Supported 00:28:06.741 00:28:06.741 Firmware Slot Information 00:28:06.741 ========================= 00:28:06.741 Active slot: 0 00:28:06.741 00:28:06.741 00:28:06.741 Error Log 00:28:06.741 ========= 00:28:06.741 00:28:06.741 Active Namespaces 00:28:06.741 ================= 00:28:06.741 Discovery Log Page 00:28:06.741 ================== 00:28:06.741 Generation Counter: 2 00:28:06.741 Number of Records: 2 00:28:06.741 Record Format: 0 00:28:06.741 00:28:06.741 Discovery Log Entry 0 00:28:06.741 ---------------------- 00:28:06.741 Transport Type: 3 (TCP) 00:28:06.741 Address Family: 1 (IPv4) 00:28:06.741 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:06.741 Entry Flags: 00:28:06.741 Duplicate Returned Information: 0 00:28:06.741 Explicit Persistent Connection Support for Discovery: 0 00:28:06.741 Transport Requirements: 00:28:06.741 Secure Channel: Not Specified 00:28:06.741 Port ID: 1 (0x0001) 00:28:06.741 Controller ID: 65535 (0xffff) 00:28:06.741 Admin Max SQ Size: 32 00:28:06.741 Transport Service Identifier: 4420 00:28:06.741 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:06.741 Transport Address: 10.0.0.1 00:28:06.741 Discovery Log Entry 1 00:28:06.741 ---------------------- 00:28:06.741 Transport Type: 3 (TCP) 00:28:06.741 Address Family: 1 (IPv4) 00:28:06.741 Subsystem Type: 2 (NVM Subsystem) 00:28:06.741 Entry Flags: 
00:28:06.741 Duplicate Returned Information: 0 00:28:06.741 Explicit Persistent Connection Support for Discovery: 0 00:28:06.741 Transport Requirements: 00:28:06.741 Secure Channel: Not Specified 00:28:06.741 Port ID: 1 (0x0001) 00:28:06.741 Controller ID: 65535 (0xffff) 00:28:06.741 Admin Max SQ Size: 32 00:28:06.741 Transport Service Identifier: 4420 00:28:06.741 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:06.741 Transport Address: 10.0.0.1 00:28:06.741 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:06.741 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.741 get_feature(0x01) failed 00:28:06.741 get_feature(0x02) failed 00:28:06.741 get_feature(0x04) failed 00:28:06.741 ===================================================== 00:28:06.741 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:06.741 ===================================================== 00:28:06.741 Controller Capabilities/Features 00:28:06.741 ================================ 00:28:06.741 Vendor ID: 0000 00:28:06.741 Subsystem Vendor ID: 0000 00:28:06.741 Serial Number: 8e4fcc6c446551581120 00:28:06.741 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:06.741 Firmware Version: 6.7.0-68 00:28:06.741 Recommended Arb Burst: 6 00:28:06.741 IEEE OUI Identifier: 00 00 00 00:28:06.741 Multi-path I/O 00:28:06.741 May have multiple subsystem ports: Yes 00:28:06.741 May have multiple controllers: Yes 00:28:06.741 Associated with SR-IOV VF: No 00:28:06.741 Max Data Transfer Size: Unlimited 00:28:06.741 Max Number of Namespaces: 1024 00:28:06.741 Max Number of I/O Queues: 128 00:28:06.741 NVMe Specification Version (VS): 1.3 00:28:06.741 NVMe Specification Version (Identify): 1.3 00:28:06.741 Maximum Queue Entries: 1024 00:28:06.741 Contiguous Queues Required: No 00:28:06.741 Arbitration Mechanisms Supported 00:28:06.741 Weighted Round Robin: Not Supported 00:28:06.741 Vendor Specific: Not Supported 00:28:06.741 Reset Timeout: 7500 ms 00:28:06.741 Doorbell Stride: 4 bytes 00:28:06.741 NVM Subsystem Reset: Not Supported 00:28:06.741 Command Sets Supported 00:28:06.741 NVM Command Set: Supported 00:28:06.741 Boot Partition: Not Supported 00:28:06.741 Memory Page Size Minimum: 4096 bytes 00:28:06.741 Memory Page Size Maximum: 4096 bytes 00:28:06.741 Persistent Memory Region: Not Supported 00:28:06.741 Optional Asynchronous Events Supported 00:28:06.741 Namespace Attribute Notices: Supported 00:28:06.741 Firmware Activation Notices: Not Supported 00:28:06.741 ANA Change Notices: Supported 00:28:06.741 PLE Aggregate Log Change Notices: Not Supported 00:28:06.741 LBA Status Info Alert Notices: Not Supported 00:28:06.741 EGE Aggregate Log Change Notices: Not Supported 00:28:06.741 Normal NVM Subsystem Shutdown event: Not Supported 00:28:06.741 Zone Descriptor Change Notices: Not Supported 00:28:06.741 Discovery Log Change Notices: Not Supported 00:28:06.741 Controller Attributes 00:28:06.741 128-bit Host Identifier: Supported 00:28:06.741 Non-Operational Permissive Mode: Not Supported 00:28:06.741 NVM Sets: Not Supported 00:28:06.741 Read Recovery Levels: Not Supported 00:28:06.741 Endurance Groups: Not Supported 00:28:06.741 Predictable Latency Mode: Not Supported 00:28:06.741 Traffic Based Keep ALive: Supported 00:28:06.741 Namespace Granularity: Not Supported 
00:28:06.741 SQ Associations: Not Supported 00:28:06.741 UUID List: Not Supported 00:28:06.741 Multi-Domain Subsystem: Not Supported 00:28:06.741 Fixed Capacity Management: Not Supported 00:28:06.741 Variable Capacity Management: Not Supported 00:28:06.741 Delete Endurance Group: Not Supported 00:28:06.741 Delete NVM Set: Not Supported 00:28:06.741 Extended LBA Formats Supported: Not Supported 00:28:06.741 Flexible Data Placement Supported: Not Supported 00:28:06.741 00:28:06.741 Controller Memory Buffer Support 00:28:06.741 ================================ 00:28:06.741 Supported: No 00:28:06.741 00:28:06.741 Persistent Memory Region Support 00:28:06.741 ================================ 00:28:06.741 Supported: No 00:28:06.741 00:28:06.741 Admin Command Set Attributes 00:28:06.741 ============================ 00:28:06.741 Security Send/Receive: Not Supported 00:28:06.741 Format NVM: Not Supported 00:28:06.741 Firmware Activate/Download: Not Supported 00:28:06.741 Namespace Management: Not Supported 00:28:06.741 Device Self-Test: Not Supported 00:28:06.741 Directives: Not Supported 00:28:06.741 NVMe-MI: Not Supported 00:28:06.741 Virtualization Management: Not Supported 00:28:06.741 Doorbell Buffer Config: Not Supported 00:28:06.741 Get LBA Status Capability: Not Supported 00:28:06.741 Command & Feature Lockdown Capability: Not Supported 00:28:06.741 Abort Command Limit: 4 00:28:06.741 Async Event Request Limit: 4 00:28:06.741 Number of Firmware Slots: N/A 00:28:06.741 Firmware Slot 1 Read-Only: N/A 00:28:06.741 Firmware Activation Without Reset: N/A 00:28:06.741 Multiple Update Detection Support: N/A 00:28:06.741 Firmware Update Granularity: No Information Provided 00:28:06.741 Per-Namespace SMART Log: Yes 00:28:06.741 Asymmetric Namespace Access Log Page: Supported 00:28:06.741 ANA Transition Time : 10 sec 00:28:06.741 00:28:06.741 Asymmetric Namespace Access Capabilities 00:28:06.741 ANA Optimized State : Supported 00:28:06.741 ANA Non-Optimized State : Supported 00:28:06.741 ANA Inaccessible State : Supported 00:28:06.741 ANA Persistent Loss State : Supported 00:28:06.741 ANA Change State : Supported 00:28:06.742 ANAGRPID is not changed : No 00:28:06.742 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:06.742 00:28:06.742 ANA Group Identifier Maximum : 128 00:28:06.742 Number of ANA Group Identifiers : 128 00:28:06.742 Max Number of Allowed Namespaces : 1024 00:28:06.742 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:06.742 Command Effects Log Page: Supported 00:28:06.742 Get Log Page Extended Data: Supported 00:28:06.742 Telemetry Log Pages: Not Supported 00:28:06.742 Persistent Event Log Pages: Not Supported 00:28:06.742 Supported Log Pages Log Page: May Support 00:28:06.742 Commands Supported & Effects Log Page: Not Supported 00:28:06.742 Feature Identifiers & Effects Log Page:May Support 00:28:06.742 NVMe-MI Commands & Effects Log Page: May Support 00:28:06.742 Data Area 4 for Telemetry Log: Not Supported 00:28:06.742 Error Log Page Entries Supported: 128 00:28:06.742 Keep Alive: Supported 00:28:06.742 Keep Alive Granularity: 1000 ms 00:28:06.742 00:28:06.742 NVM Command Set Attributes 00:28:06.742 ========================== 00:28:06.742 Submission Queue Entry Size 00:28:06.742 Max: 64 00:28:06.742 Min: 64 00:28:06.742 Completion Queue Entry Size 00:28:06.742 Max: 16 00:28:06.742 Min: 16 00:28:06.742 Number of Namespaces: 1024 00:28:06.742 Compare Command: Not Supported 00:28:06.742 Write Uncorrectable Command: Not Supported 00:28:06.742 Dataset Management Command: Supported 
00:28:06.742 Write Zeroes Command: Supported 00:28:06.742 Set Features Save Field: Not Supported 00:28:06.742 Reservations: Not Supported 00:28:06.742 Timestamp: Not Supported 00:28:06.742 Copy: Not Supported 00:28:06.742 Volatile Write Cache: Present 00:28:06.742 Atomic Write Unit (Normal): 1 00:28:06.742 Atomic Write Unit (PFail): 1 00:28:06.742 Atomic Compare & Write Unit: 1 00:28:06.742 Fused Compare & Write: Not Supported 00:28:06.742 Scatter-Gather List 00:28:06.742 SGL Command Set: Supported 00:28:06.742 SGL Keyed: Not Supported 00:28:06.742 SGL Bit Bucket Descriptor: Not Supported 00:28:06.742 SGL Metadata Pointer: Not Supported 00:28:06.742 Oversized SGL: Not Supported 00:28:06.742 SGL Metadata Address: Not Supported 00:28:06.742 SGL Offset: Supported 00:28:06.742 Transport SGL Data Block: Not Supported 00:28:06.742 Replay Protected Memory Block: Not Supported 00:28:06.742 00:28:06.742 Firmware Slot Information 00:28:06.742 ========================= 00:28:06.742 Active slot: 0 00:28:06.742 00:28:06.742 Asymmetric Namespace Access 00:28:06.742 =========================== 00:28:06.742 Change Count : 0 00:28:06.742 Number of ANA Group Descriptors : 1 00:28:06.742 ANA Group Descriptor : 0 00:28:06.742 ANA Group ID : 1 00:28:06.742 Number of NSID Values : 1 00:28:06.742 Change Count : 0 00:28:06.742 ANA State : 1 00:28:06.742 Namespace Identifier : 1 00:28:06.742 00:28:06.742 Commands Supported and Effects 00:28:06.742 ============================== 00:28:06.742 Admin Commands 00:28:06.742 -------------- 00:28:06.742 Get Log Page (02h): Supported 00:28:06.742 Identify (06h): Supported 00:28:06.742 Abort (08h): Supported 00:28:06.742 Set Features (09h): Supported 00:28:06.742 Get Features (0Ah): Supported 00:28:06.742 Asynchronous Event Request (0Ch): Supported 00:28:06.742 Keep Alive (18h): Supported 00:28:06.742 I/O Commands 00:28:06.742 ------------ 00:28:06.742 Flush (00h): Supported 00:28:06.742 Write (01h): Supported LBA-Change 00:28:06.742 Read (02h): Supported 00:28:06.742 Write Zeroes (08h): Supported LBA-Change 00:28:06.742 Dataset Management (09h): Supported 00:28:06.742 00:28:06.742 Error Log 00:28:06.742 ========= 00:28:06.742 Entry: 0 00:28:06.742 Error Count: 0x3 00:28:06.742 Submission Queue Id: 0x0 00:28:06.742 Command Id: 0x5 00:28:06.742 Phase Bit: 0 00:28:06.742 Status Code: 0x2 00:28:06.742 Status Code Type: 0x0 00:28:06.742 Do Not Retry: 1 00:28:06.742 Error Location: 0x28 00:28:06.742 LBA: 0x0 00:28:06.742 Namespace: 0x0 00:28:06.742 Vendor Log Page: 0x0 00:28:06.742 ----------- 00:28:06.742 Entry: 1 00:28:06.742 Error Count: 0x2 00:28:06.742 Submission Queue Id: 0x0 00:28:06.742 Command Id: 0x5 00:28:06.742 Phase Bit: 0 00:28:06.742 Status Code: 0x2 00:28:06.742 Status Code Type: 0x0 00:28:06.742 Do Not Retry: 1 00:28:06.742 Error Location: 0x28 00:28:06.742 LBA: 0x0 00:28:06.742 Namespace: 0x0 00:28:06.742 Vendor Log Page: 0x0 00:28:06.742 ----------- 00:28:06.742 Entry: 2 00:28:06.742 Error Count: 0x1 00:28:06.742 Submission Queue Id: 0x0 00:28:06.742 Command Id: 0x4 00:28:06.742 Phase Bit: 0 00:28:06.742 Status Code: 0x2 00:28:06.742 Status Code Type: 0x0 00:28:06.742 Do Not Retry: 1 00:28:06.742 Error Location: 0x28 00:28:06.742 LBA: 0x0 00:28:06.742 Namespace: 0x0 00:28:06.742 Vendor Log Page: 0x0 00:28:06.742 00:28:06.742 Number of Queues 00:28:06.742 ================ 00:28:06.742 Number of I/O Submission Queues: 128 00:28:06.742 Number of I/O Completion Queues: 128 00:28:06.742 00:28:06.742 ZNS Specific Controller Data 00:28:06.742 
============================ 00:28:06.742 Zone Append Size Limit: 0 00:28:06.742 00:28:06.742 00:28:06.742 Active Namespaces 00:28:06.742 ================= 00:28:06.742 get_feature(0x05) failed 00:28:06.742 Namespace ID:1 00:28:06.742 Command Set Identifier: NVM (00h) 00:28:06.742 Deallocate: Supported 00:28:06.742 Deallocated/Unwritten Error: Not Supported 00:28:06.742 Deallocated Read Value: Unknown 00:28:06.742 Deallocate in Write Zeroes: Not Supported 00:28:06.742 Deallocated Guard Field: 0xFFFF 00:28:06.742 Flush: Supported 00:28:06.742 Reservation: Not Supported 00:28:06.742 Namespace Sharing Capabilities: Multiple Controllers 00:28:06.742 Size (in LBAs): 3750748848 (1788GiB) 00:28:06.742 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:06.742 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:06.742 UUID: 61a13715-4dcb-4354-99a4-291508e57b2b 00:28:06.742 Thin Provisioning: Not Supported 00:28:06.742 Per-NS Atomic Units: Yes 00:28:06.742 Atomic Write Unit (Normal): 8 00:28:06.742 Atomic Write Unit (PFail): 8 00:28:06.742 Preferred Write Granularity: 8 00:28:06.742 Atomic Compare & Write Unit: 8 00:28:06.742 Atomic Boundary Size (Normal): 0 00:28:06.742 Atomic Boundary Size (PFail): 0 00:28:06.742 Atomic Boundary Offset: 0 00:28:06.742 NGUID/EUI64 Never Reused: No 00:28:06.742 ANA group ID: 1 00:28:06.742 Namespace Write Protected: No 00:28:06.742 Number of LBA Formats: 1 00:28:06.742 Current LBA Format: LBA Format #00 00:28:06.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:06.742 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.742 rmmod nvme_tcp 00:28:06.742 rmmod nvme_fabrics 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.742 14:14:04 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:09.282 14:14:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:12.578 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:12.578 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:12.839 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:12.839 00:28:12.839 real 0m19.578s 00:28:12.839 user 0m5.301s 00:28:12.839 sys 0m11.346s 00:28:12.839 14:14:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.839 14:14:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.839 ************************************ 00:28:12.839 END TEST nvmf_identify_kernel_target 00:28:12.839 ************************************ 00:28:12.839 14:14:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:12.839 14:14:10 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:12.839 14:14:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:12.839 14:14:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.839 14:14:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.839 ************************************ 
00:28:12.839 START TEST nvmf_auth_host 00:28:12.839 ************************************ 00:28:12.839 14:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:13.100 * Looking for test storage... 00:28:13.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:13.100 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:13.101 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.101 14:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.101 14:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.101 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:13.101 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:13.101 14:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.101 14:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.239 
14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:21.239 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.239 14:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:21.239 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:21.239 Found net devices under 0000:31:00.0: 
cvl_0_0 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.239 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:21.240 Found net devices under 0000:31:00.1: cvl_0_1 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:28:21.240 00:28:21.240 --- 10.0.0.2 ping statistics --- 00:28:21.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.240 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:28:21.240 00:28:21.240 --- 10.0.0.1 ping statistics --- 00:28:21.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.240 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1536183 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1536183 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1536183 ']' 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
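For reference, the TCP topology that nvmf_tcp_init assembles in the traces above reduces to the sketch below. It assumes what this job reports: two ports of one E810 NIC wired back to back (cvl_0_0 and cvl_0_1), with the target-side port moved into a private network namespace so initiator and target exchange real NVMe/TCP traffic on a single host. Interface names and addresses are the ones in the log; this is a condensed illustration, not the full nvmf/common.sh logic.

ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow inbound NVMe/TCP on the initiator-side port
ping -c 1 10.0.0.2                                  # initiator -> target sanity check

With the namespace in place, nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" with -L nvme_auth, so every connection the auth test makes crosses the cvl_0_1 -> cvl_0_0 link.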
00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.240 14:14:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:22.183 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=52fbb4ec136ea5abc4a1db97d8925929 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BOG 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 52fbb4ec136ea5abc4a1db97d8925929 0 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 52fbb4ec136ea5abc4a1db97d8925929 0 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=52fbb4ec136ea5abc4a1db97d8925929 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BOG 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BOG 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.BOG 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:22.184 
14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=52c98e3541b32b77f7708c904eb55d002e2f6848199abd7d10c7909a126c1aa7 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XWz 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 52c98e3541b32b77f7708c904eb55d002e2f6848199abd7d10c7909a126c1aa7 3 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 52c98e3541b32b77f7708c904eb55d002e2f6848199abd7d10c7909a126c1aa7 3 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=52c98e3541b32b77f7708c904eb55d002e2f6848199abd7d10c7909a126c1aa7 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:22.184 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XWz 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XWz 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XWz 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb27cfa936a3e487fec474c7acb1df84984f4155af4b6ee5 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Lc0 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb27cfa936a3e487fec474c7acb1df84984f4155af4b6ee5 0 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb27cfa936a3e487fec474c7acb1df84984f4155af4b6ee5 0 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb27cfa936a3e487fec474c7acb1df84984f4155af4b6ee5 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Lc0 00:28:22.445 14:14:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Lc0 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Lc0 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d022d0024b0ed3579b2d3930a3a8d5907cb24ce54a7007a2 00:28:22.445 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4IK 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d022d0024b0ed3579b2d3930a3a8d5907cb24ce54a7007a2 2 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d022d0024b0ed3579b2d3930a3a8d5907cb24ce54a7007a2 2 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d022d0024b0ed3579b2d3930a3a8d5907cb24ce54a7007a2 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4IK 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4IK 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4IK 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6a07b767596fe9349d3236c83d9c20cf 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Us5 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6a07b767596fe9349d3236c83d9c20cf 1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6a07b767596fe9349d3236c83d9c20cf 1 
00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6a07b767596fe9349d3236c83d9c20cf 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Us5 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Us5 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Us5 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=96b16ae697961329a8dd37354ba8f89e 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.agS 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 96b16ae697961329a8dd37354ba8f89e 1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 96b16ae697961329a8dd37354ba8f89e 1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=96b16ae697961329a8dd37354ba8f89e 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:22.446 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.agS 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.agS 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.agS 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=4bee6bf613370ad6e916c7e93f285cad0d3d5f297b500caf 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WNm 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4bee6bf613370ad6e916c7e93f285cad0d3d5f297b500caf 2 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4bee6bf613370ad6e916c7e93f285cad0d3d5f297b500caf 2 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4bee6bf613370ad6e916c7e93f285cad0d3d5f297b500caf 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WNm 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WNm 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WNm 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.707 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=83c94405e8d83667133ce7ffcabfc06d 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3gC 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 83c94405e8d83667133ce7ffcabfc06d 0 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 83c94405e8d83667133ce7ffcabfc06d 0 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=83c94405e8d83667133ce7ffcabfc06d 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3gC 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3gC 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3gC 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f8eaf3ef49a17a0b62cb9eff66ac6223facd366e4822e7dca20699573b865e99 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kmp 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f8eaf3ef49a17a0b62cb9eff66ac6223facd366e4822e7dca20699573b865e99 3 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f8eaf3ef49a17a0b62cb9eff66ac6223facd366e4822e7dca20699573b865e99 3 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f8eaf3ef49a17a0b62cb9eff66ac6223facd366e4822e7dca20699573b865e99 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kmp 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kmp 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.kmp 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1536183 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1536183 ']' 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
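Each secret traced above comes from gen_dhchap_key: xxd draws len/2 random bytes out of /dev/urandom as a hex string, mktemp reserves a /tmp/spdk.key-<digest>.XXX file, an inline python snippet (xtrace shows only "python -") wraps the hex into a DHHC-1 secret, and chmod 0600 locks the file down before the path is echoed back into keys[]/ckeys[]. A minimal sketch, assuming the NVMe DH-HMAC-CHAP secret layout DHHC-1:<hash-id>:base64(hex-secret + CRC32): for the hidden python body:

# Sketch of gen_dhchap_key/format_dhchap_key as traced above. The python
# body is an assumption (xtrace hides its stdin); the emitted keys are
# consistent with base64 over the ASCII hex secret plus a 4-byte CRC32,
# with the CRC byte order also assumed.
gen_dhchap_key() {
    local digest=$1 len=$2                        # e.g. sha256 32
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c 'import base64, sys, zlib; s = sys.argv[1].encode(); crc = zlib.crc32(s).to_bytes(4, "little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s + crc).decode()))' \
        "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"                                  # callers store the path in keys[]/ckeys[]
}

Base64-decoding any of the DHHC-1 strings later in this log yields the ASCII hex secret plus four trailing CRC bytes, which is what the sketch reproduces.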
00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.708 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BOG 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XWz ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XWz 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Lc0 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4IK ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4IK 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Us5 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.agS ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.agS 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
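The (( i == 0 )) / return 0 pair above is the tail of waitforlisten, whose body runs with xtrace disabled. The polling loop below is therefore a sketch under that assumption; only the locals, the printed message, and the exit checks come from the trace:

# Sketch of waitforlisten (autotest_common.sh). The loop body is hidden
# behind xtrace_disable, so the polling strategy here is an assumption;
# rpc_addr, max_retries, the message and the exit path match the trace.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1    # target app died
        [[ -S $rpc_addr ]] && break                # RPC socket is up
        sleep 0.5
    done
    (( i == 0 )) && return 1                       # retries exhausted
    return 0
}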
00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.WNm 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3gC ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3gC 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kmp 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
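With the app listening, host/auth.sh@80-82 hands every generated file to the keyring module: keyN becomes the host secret for key index N and ckeyN the controller (bidirectional) secret, with ckey4 skipped because ckeys[4] is empty. The same registration as direct calls (the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions; key names and file paths are this run's values from the trace):

# The registration loop above, expressed as direct rpc.py calls.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.BOG
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XWz
$RPC keyring_file_add_key key1 /tmp/spdk.key-null.Lc0
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4IK
$RPC keyring_file_add_key key2 /tmp/spdk.key-sha256.Us5
$RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.agS
$RPC keyring_file_add_key key3 /tmp/spdk.key-sha384.WNm
$RPC keyring_file_add_key ckey3 /tmp/spdk.key-null.3gC
$RPC keyring_file_add_key key4 /tmp/spdk.key-sha512.kmp    # ckeys[4] is empty, so no ckey4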
00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:22.969 14:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:27.176 Waiting for block devices as requested 00:28:27.177 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:27.177 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:27.177 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:27.177 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:27.177 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:27.177 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:27.177 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:27.177 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:27.437 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:27.437 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:27.437 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:27.699 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:27.699 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:27.699 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:27.699 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:27.959 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:27.959 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:28.530 14:14:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:28.791 No valid GPT data, bailing 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:28:28.791 00:28:28.791 Discovery Log Number of Records 2, Generation counter 2 00:28:28.791 =====Discovery Log Entry 0====== 00:28:28.791 trtype: tcp 00:28:28.791 adrfam: ipv4 00:28:28.791 subtype: current discovery subsystem 00:28:28.791 treq: not specified, sq flow control disable supported 00:28:28.791 portid: 1 00:28:28.791 trsvcid: 4420 00:28:28.791 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:28.791 traddr: 10.0.0.1 00:28:28.791 eflags: none 00:28:28.791 sectype: none 00:28:28.791 =====Discovery Log Entry 1====== 00:28:28.791 trtype: tcp 00:28:28.791 adrfam: ipv4 00:28:28.791 subtype: nvme subsystem 00:28:28.791 treq: not specified, sq flow control disable supported 00:28:28.791 portid: 1 00:28:28.791 trsvcid: 4420 00:28:28.791 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:28.791 traddr: 10.0.0.1 00:28:28.791 eflags: none 00:28:28.791 sectype: none 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.791 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 
]] 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.792 nvme0n1 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.792 
14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.792 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:29.054 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.055 
14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.055 14:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.055 nvme0n1 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.055 14:14:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.055 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.317 nvme0n1 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
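Everything the target does in these cycles is plain configfs I/O. The redirection targets are invisible to xtrace (only the echo arguments are traced), so the attribute names below are the standard Linux nvmet layout and should be read as an assumption; the values are the ones traced above. A condensed sketch of configure_kernel_target, nvmet_auth_init and one nvmet_auth_set_key call:

# Target-side sketch: the soft target plus the per-host DH-HMAC-CHAP
# material. attr_*/dhchap_* names are the usual nvmet configfs interface,
# assumed here since the log shows only the echoed values.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string (attr name assumed)
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# configure_kernel_target first sets this to 1; nvmet_auth_init flips it
# to 0 and allow-lists only the test host:
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"

# nvmet_auth_set_key <digest> <dhgroup> <keyid>, here sha256 ffdhe2048 1:
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==:" > "$host/dhchap_key"
echo "DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==:" > "$host/dhchap_ctrl_key"

Writing a dhchap_key into the host entry is what arms authentication on the target side as exercised here; the allowed_hosts symlink only pins which host NQN may connect at all.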
00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.317 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.578 nvme0n1 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:29.578 14:14:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.578 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.839 nvme0n1 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.839 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.100 nvme0n1 00:28:30.100 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.100 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.100 14:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.100 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.100 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.100 14:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.100 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.101 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.363 nvme0n1 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.363 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.624 nvme0n1 00:28:30.624 
14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.625 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 nvme0n1 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
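The echo 'hmac(sha256)' and echo ffdhe3072 lines here, together with the DHHC-1 echoes that follow, are nvmet_auth_set_key provisioning the Linux kernel target for the next keyid. A sketch of where those writes land, assuming the stock nvmet configfs layout with a per-host entry named after the initiator NQN (paths and attribute names are the usual kernel ones, not copied from this excerpt):

    # Per-host DH-HMAC-CHAP settings on the kernel nvmet target
    # (sketch of nvmet_auth_set_key sha256 ffdhe3072 3).
    H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    key='DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==:'
    ckey='DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7:'
    echo 'hmac(sha256)' > "$H/dhchap_hash"      # digest      (host/auth.sh@48)
    echo 'ffdhe3072'    > "$H/dhchap_dhgroup"   # DH group    (host/auth.sh@49)
    echo "$key"         > "$H/dhchap_key"       # host secret (host/auth.sh@50)
    echo "$ckey"        > "$H/dhchap_ctrl_key"  # controller secret, written
                                                # only for bidirectional auth
                                                # (host/auth.sh@51)

For keyid 4 the trace sets ckey to the empty string, so the [[ -z ... ]] guard at host/auth.sh@51 skips the controller-key write and that pass exercises unidirectional authentication only.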
00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 14:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.148 nvme0n1 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.148 
14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.148 14:14:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.148 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.408 nvme0n1 00:28:31.408 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.408 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.408 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:31.409 14:14:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.409 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.669 nvme0n1 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.669 14:14:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.669 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.928 nvme0n1 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.928 14:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:31.928 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.929 14:14:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.929 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.188 nvme0n1 00:28:32.188 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.188 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.188 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.188 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.188 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
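The key= and ckey= values cycled through this run are DH-HMAC-CHAP secrets in the DHHC-1:<t>:<base64>: representation: the decoded payload is the raw secret followed by a 4-byte CRC-32 check, and the <t> field names the optional secret transformation hash (00 = none; 01/02/03 = SHA-256/384/512, which also fixes the secret at 32/48/64 bytes). That reading matches every key length in this trace, e.g. the keyid=1 secret:

    # 72 base64 characters decode to 52 bytes: 48-byte secret + 4-byte CRC-32.
    echo 'ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==' |
        base64 -d | wc -c    # prints 52

Likewise the 01-, 02-, and 03-tagged keys used for keyids 2, 3, and 4 decode to 36, 52, and 68 bytes (32-, 48-, and 64-byte secrets plus CRC), consistent with the tag-to-length mapping above.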
00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.448 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.730 nvme0n1 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.730 14:14:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.730 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.731 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.066 nvme0n1 00:28:33.066 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.066 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.066 14:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.066 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.066 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.066 14:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:33.066 14:14:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.066 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.637 nvme0n1 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.637 
14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.637 14:14:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.637 14:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.208 nvme0n1 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.208 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.468 nvme0n1 00:28:34.468 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.468 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.468 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.468 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.468 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.468 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.728 
14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.728 14:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.729 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.729 14:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.988 nvme0n1 00:28:34.988 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.988 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.988 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.988 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.988 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.258 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.258 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.259 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.519 nvme0n1 00:28:35.519 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.520 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.520 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.520 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.520 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.779 14:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.348 nvme0n1 00:28:36.348 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.348 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.348 14:14:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.348 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.348 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.348 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.608 14:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.178 nvme0n1 00:28:37.178 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.178 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.178 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.178 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.178 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.178 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.438 14:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.007 nvme0n1 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.007 
14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.007 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
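The nvmet_auth_set_key calls traced throughout this run (host/auth.sh@42-@51) configure the kernel nvmet target side of each DH-HMAC-CHAP combination. A minimal reconstruction from the xtrace output follows; the redirect targets are an assumption, since the trace shows only the echo half of each statement, and $nvmet_host_dir is a hypothetical stand-in for the target's configfs host directory:

nvmet_auth_set_key() { # sketch reconstructed from the @42-@51 trace
	local digest dhgroup keyid key ckey
	digest="$1" dhgroup="$2" keyid="$3"
	key="${keys[keyid]}" ckey="${ckeys[keyid]}"
	echo "hmac($digest)" > "$nvmet_host_dir/dhchap_hash"   # @48 (assumed target)
	echo "$dhgroup" > "$nvmet_host_dir/dhchap_dhgroup"     # @49 (assumed target)
	echo "$key" > "$nvmet_host_dir/dhchap_key"             # @50 (assumed target)
	# @51: the controller key is optional; keyid 4 traces with an empty ckey
	[[ -z $ckey ]] || echo "$ckey" > "$nvmet_host_dir/dhchap_ctrl_key"
}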
00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.267 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.268 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.268 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.268 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.268 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.839 nvme0n1 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.839 
14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.839 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.100 14:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.671 nvme0n1 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.671 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.672 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.932 nvme0n1 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
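At this point one full host/target round trip has completed above (attach with key0/ckey0, verify the controller name, detach) and the next keyid is already underway. The driving loops visible in the trace (@100-@102) reduce to a three-level matrix; the array contents noted in the comments are only what appears in this portion of the log:

for digest in "${digests[@]}"; do        # @100: sha256 and sha384 appear here
	for dhgroup in "${dhgroups[@]}"; do  # @101: ffdhe2048 through ffdhe8192
		for keyid in "${!keys[@]}"; do   # @102: key indices 0..4
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # @103, target side
			connect_authenticate "$digest" "$dhgroup" "$keyid" # @104, host side
		done
	done
done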
00:28:39.932 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.933 14:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.194 nvme0n1 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.194 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.454 nvme0n1 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.455 nvme0n1 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.455 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.715 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.716 nvme0n1 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.716 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
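The get_main_ns_ip trace that continues below (nvmf/common.sh@741-@755) picks the address the host connects to, 10.0.0.1 for tcp. A sketch under stated assumptions: $TEST_TRANSPORT is a guessed name for the variable holding the "tcp" value tested at @747, and the indirect expansion between @748 and @750 is inferred rather than traced:

get_main_ns_ip() {
	local ip                                     # @741
	local -A ip_candidates=()                    # @742
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @744
	ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @745
	[[ -z $TEST_TRANSPORT ]] && return 1                    # @747, here: tcp
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
	ip=${ip_candidates[$TEST_TRANSPORT]}         # @748: ip=NVMF_INITIATOR_IP
	ip=${!ip}                                    # inferred: resolves to 10.0.0.1
	[[ -z $ip ]] && return 1                     # @750
	echo "$ip"                                   # @755
}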
00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.976 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.977 14:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.977 14:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.977 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.977 14:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.977 nvme0n1 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
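The connect_authenticate call logged just above is the host-side half of each iteration. Its body can be read back almost verbatim from the @55-@65 trace; literal values (-t tcp, port 4420, the host and subsystem NQNs) are kept exactly as traced, though the real script presumably parameterizes them:

connect_authenticate() {
	local digest dhgroup keyid ckey                             # @55
	digest="$1" dhgroup="$2" keyid="$3"                         # @57
	# @58: pass a controller key only when one exists for this keyid
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	rpc_cmd bdev_nvme_set_options \
		--dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup" # @60
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"                 # @61
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]] # @64
	rpc_cmd bdev_nvme_detach_controller nvme0                   # @65
}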
00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.977 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.237 nvme0n1 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.237 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.498 nvme0n1 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.498 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.780 nvme0n1 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.780 14:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.781 14:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.781 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.781 14:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.041 nvme0n1 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.041 14:14:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.041 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.042 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.042 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.302 nvme0n1 00:28:42.302 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.302 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.302 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.302 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.302 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:42.563 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.564 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.825 nvme0n1 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.825 14:14:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.825 14:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.086 nvme0n1 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:43.086 14:14:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.086 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.347 nvme0n1 00:28:43.347 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:43.608 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.905 nvme0n1 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.905 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.906 14:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.168 nvme0n1 00:28:44.168 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.168 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.168 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.168 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.168 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:44.429 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.430 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.691 nvme0n1 00:28:44.691 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.691 14:14:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.691 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.691 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.691 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.691 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.953 14:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.218 nvme0n1 00:28:45.219 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.219 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.219 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.219 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.219 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.219 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.483 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.743 nvme0n1 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.743 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
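The keyid=4 passes above exercise the one-way-authentication path: the trace shows ckey set to the empty string (host/auth.sh@46), [[ -z '' ]] skipping the controller-key echo (@51), and the array expansion at @58 that drops the flag entirely, so the attach goes out with --dhchap-key key4 only. A minimal sketch of that bash idiom, reusing the variable names from the trace (a reading aid, not a verbatim excerpt of auth.sh):

# ${var:+word} expands to nothing when ckeys[keyid] is empty or unset,
# so no --dhchap-ctrlr-key argument is passed for one-way authentication
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"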
00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.003 14:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.264 nvme0n1 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.264 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
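Each connect_authenticate pass in this trace reduces to the same four RPC calls (host/auth.sh@60-65). A condensed sketch for the sha384/ffdhe8192/keyid=0 iteration in flight here, using the same addressing as the run; treat it as a summary of the trace, not a verbatim excerpt:

# pin the initiator to the digest/dhgroup combination under test
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# connect to the target, presenting the host key and (when configured) the controller key
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
# confirm DH-HMAC-CHAP succeeded (a controller named nvme0 exists), then detach for the next keyid
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0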
00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.524 14:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.096 nvme0n1 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.096 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.356 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.929 nvme0n1 00:28:47.929 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.929 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.929 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.929 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.929 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.929 14:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.929 14:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.929 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.190 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.190 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.191 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.764 nvme0n1 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.764 14:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.708 nvme0n1 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.708 14:14:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.708 14:14:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 nvme0n1 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 nvme0n1 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.653 14:14:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.653 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.654 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.916 nvme0n1 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.916 14:14:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.179 nvme0n1 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.179 14:14:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.179 14:14:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.179 nvme0n1 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.179 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:51.440 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.441 nvme0n1 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.441 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:51.702 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.703 nvme0n1 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.703 
14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.703 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.965 14:14:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.965 nvme0n1 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.965 14:14:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
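
The nvmet_auth_set_key traces above (host/auth.sh@42-51) show only the values being echoed, never their destinations. A minimal sketch of the target-side provisioning they imply, assuming the Linux nvmet per-host DH-HMAC-CHAP configfs attributes; the $host path and attribute names below are assumptions, not something this log confirms:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs destination; the xtrace output shows the echoed
        # values (hmac(sha512), the ffdhe group, the DHHC-1 secrets) only.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        # A controller key is written only when one exists for this keyid,
        # matching the [[ -z ... ]] guard traced at host/auth.sh@51.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
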
00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.965 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.227 nvme0n1 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.227 14:14:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.227 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
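
The nvmf/common.sh@741-755 entries that repeat throughout this run trace get_main_ns_ip. A rough reconstruction under stated assumptions: the array keys and variable names are taken from the trace, while the transport variable (called TEST_TRANSPORT here) is inferred from the literal tcp expanded in the [[ -z tcp ]] tests:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        # ip holds the *name* of the environment variable; ${!ip} is bash
        # indirect expansion, which dereferences it (10.0.0.1 on this host).
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

The indirection explains why the trace prints the variable name (NVMF_INITIATOR_IP) at nvmf/common.sh@748 but the concrete address (10.0.0.1) at @755.
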
00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.228 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.490 nvme0n1 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.490 
14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.490 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 nvme0n1 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.752 14:14:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.012 nvme0n1 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.281 14:14:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.281 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.282 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.282 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.282 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.610 nvme0n1 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
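
Each complete iteration above follows the same connect_authenticate pattern (host/auth.sh@55-65). Condensed into a sketch, with the RPC names, flags, NQNs, and address taken verbatim from the trace; rpc_cmd is the suite's wrapper for issuing SPDK JSON-RPC calls, and the exact wiring inside auth.sh may differ slightly:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pass a controller key only if one is defined for this keyid
        # (array expansion copied from the trace at host/auth.sh@58).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only produces a controller if DH-HMAC-CHAP succeeded,
        # so checking the name is the actual authentication assertion.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
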
00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.610 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.871 nvme0n1 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.871 14:14:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.134 nvme0n1 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.134 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.395 nvme0n1 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.656 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.657 14:14:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.230 nvme0n1 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
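
Pulling the pieces together, the driving loop visible at host/auth.sh@101-104 walks every (dhgroup, keyid) combination, provisioning the target and then authenticating end to end. A sketch restricted to what this portion of the log actually exercises, namely sha512 with the ffdhe groups seen here; the arrays in auth.sh itself may cover additional digests and groups:

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in "${!keys[@]}"; do          # keyids 0 through 4 in this run
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done

The nvme0n1 lines interleaved in the log are the kernel announcing the namespace each time an attach succeeds, which is why one appears per iteration.
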
00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.230 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.491 nvme0n1 00:28:55.491 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.491 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.491 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.491 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.491 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.491 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:55.751 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.752 14:14:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.012 nvme0n1 00:28:56.012 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.012 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.012 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.012 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.012 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.012 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:56.273 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.274 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.533 nvme0n1 00:28:56.533 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.533 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.533 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.533 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.533 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.533 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.794 14:14:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.055 nvme0n1 00:28:57.055 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.055 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.055 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.055 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.055 14:14:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.055 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJmYmI0ZWMxMzZlYTVhYmM0YTFkYjk3ZDg5MjU5Mjn2QDYQ: 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTJjOThlMzU0MWIzMmI3N2Y3NzA4YzkwNGViNTVkMDAyZTJmNjg0ODE5OWFiZDdkMTBjNzkwOWExMjZjMWFhNz1uiuI=: 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.316 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.889 nvme0n1 00:28:57.889 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.889 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.889 14:14:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.889 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.889 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.889 14:14:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.150 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.718 nvme0n1 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.718 14:14:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.718 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmEwN2I3Njc1OTZmZTkzNDlkMzIzNmM4M2Q5YzIwY2ZfsF5Y: 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: ]] 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZiMTZhZTY5Nzk2MTMyOWE4ZGQzNzM1NGJhOGY4OWXjaRMr: 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.978 14:14:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.548 nvme0n1 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJlZTZiZjYxMzM3MGFkNmU5MTZjN2U5M2YyODVjYWQwZDNkNWYyOTdiNTAwY2FmHkssBQ==: 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: ]] 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODNjOTQ0MDVlOGQ4MzY2NzEzM2NlN2ZmY2FiZmMwNmShBQo7: 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:59.548 14:14:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.548 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.808 14:14:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.377 nvme0n1 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhlYWYzZWY0OWExN2EwYjYyY2I5ZWZmNjZhYzYyMjNmYWNkMzY2ZTQ4MjJlN2RjYTIwNjk5NTczYjg2NWU5OXJYQfw=: 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:00.377 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:00.378 14:14:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:00.637 14:14:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:00.637 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:00.637 14:14:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.205 nvme0n1 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyN2NmYTkzNmEzZTQ4N2ZlYzQ3NGM3YWNiMWRmODQ5ODRmNDE1NWFmNGI2ZWU1Uy14eQ==: 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAyMmQwMDI0YjBlZDM1NzliMmQzOTMwYTNhOGQ1OTA3Y2IyNGNlNTRhNzAwN2EyUAWr7Q==: 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.205 
14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.205 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.465 request: 00:29:01.465 { 00:29:01.465 "name": "nvme0", 00:29:01.465 "trtype": "tcp", 00:29:01.465 "traddr": "10.0.0.1", 00:29:01.465 "adrfam": "ipv4", 00:29:01.465 "trsvcid": "4420", 00:29:01.465 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:01.465 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:01.465 "prchk_reftag": false, 00:29:01.465 "prchk_guard": false, 00:29:01.465 "hdgst": false, 00:29:01.465 "ddgst": false, 00:29:01.465 "method": "bdev_nvme_attach_controller", 00:29:01.465 "req_id": 1 00:29:01.465 } 00:29:01.465 Got JSON-RPC error response 00:29:01.465 response: 00:29:01.465 { 00:29:01.465 "code": -5, 00:29:01.465 "message": "Input/output error" 00:29:01.465 } 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.465 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.466 request: 00:29:01.466 { 00:29:01.466 "name": "nvme0", 00:29:01.466 "trtype": "tcp", 00:29:01.466 "traddr": "10.0.0.1", 00:29:01.466 "adrfam": "ipv4", 00:29:01.466 "trsvcid": "4420", 00:29:01.466 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:01.466 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:01.466 "prchk_reftag": false, 00:29:01.466 "prchk_guard": false, 00:29:01.466 "hdgst": false, 00:29:01.466 "ddgst": false, 00:29:01.466 "dhchap_key": "key2", 00:29:01.466 "method": "bdev_nvme_attach_controller", 00:29:01.466 "req_id": 1 00:29:01.466 } 00:29:01.466 Got JSON-RPC error response 00:29:01.466 response: 00:29:01.466 { 00:29:01.466 "code": -5, 00:29:01.466 "message": "Input/output error" 00:29:01.466 } 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:01.466 14:14:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.466 request: 00:29:01.466 { 00:29:01.466 "name": "nvme0", 00:29:01.466 "trtype": "tcp", 00:29:01.466 "traddr": "10.0.0.1", 00:29:01.466 "adrfam": "ipv4", 
00:29:01.466 "trsvcid": "4420", 00:29:01.466 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:01.466 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:01.466 "prchk_reftag": false, 00:29:01.466 "prchk_guard": false, 00:29:01.466 "hdgst": false, 00:29:01.466 "ddgst": false, 00:29:01.466 "dhchap_key": "key1", 00:29:01.466 "dhchap_ctrlr_key": "ckey2", 00:29:01.466 "method": "bdev_nvme_attach_controller", 00:29:01.466 "req_id": 1 00:29:01.466 } 00:29:01.466 Got JSON-RPC error response 00:29:01.466 response: 00:29:01.466 { 00:29:01.466 "code": -5, 00:29:01.466 "message": "Input/output error" 00:29:01.466 } 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:01.466 rmmod nvme_tcp 00:29:01.466 rmmod nvme_fabrics 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1536183 ']' 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1536183 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1536183 ']' 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1536183 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.466 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1536183 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1536183' 00:29:01.726 killing process with pid 1536183 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1536183 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1536183 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.726 14:14:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:04.303 14:15:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:07.615 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:07.615 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:07.876 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:07.876 14:15:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.BOG /tmp/spdk.key-null.Lc0 /tmp/spdk.key-sha256.Us5 /tmp/spdk.key-sha384.WNm /tmp/spdk.key-sha512.kmp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:07.876 14:15:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:12.082 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:12.082 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:12.082 00:29:12.082 real 0m58.686s 00:29:12.083 user 0m51.677s 00:29:12.083 sys 0m16.111s 00:29:12.083 14:15:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.083 14:15:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.083 ************************************ 00:29:12.083 END TEST nvmf_auth_host 00:29:12.083 ************************************ 00:29:12.083 14:15:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:12.083 14:15:09 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:29:12.083 14:15:09 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:12.083 14:15:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.083 14:15:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.083 14:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.083 ************************************ 00:29:12.083 START TEST nvmf_digest 00:29:12.083 ************************************ 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:12.083 * Looking for test storage... 
00:29:12.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.083 14:15:09 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.083 14:15:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:20.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:20.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:20.223 Found net devices under 0000:31:00.0: cvl_0_0 00:29:20.223 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:20.224 Found net devices under 0000:31:00.1: cvl_0_1 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:20.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:29:20.224 00:29:20.224 --- 10.0.0.2 ping statistics --- 00:29:20.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.224 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:20.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:29:20.224 00:29:20.224 --- 10.0.0.1 ping statistics --- 00:29:20.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.224 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.224 ************************************ 00:29:20.224 START TEST nvmf_digest_clean 00:29:20.224 ************************************ 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1554039 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1554039 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1554039 ']' 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.224 14:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:20.224 [2024-07-15 14:15:17.461330] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:20.224 [2024-07-15 14:15:17.461388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.224 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.224 [2024-07-15 14:15:17.539905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.224 [2024-07-15 14:15:17.613071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.224 [2024-07-15 14:15:17.613108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.224 [2024-07-15 14:15:17.613115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.224 [2024-07-15 14:15:17.613122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.224 [2024-07-15 14:15:17.613128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:20.224 [2024-07-15 14:15:17.613146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.224 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.224 null0 00:29:20.485 [2024-07-15 14:15:18.339235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.485 [2024-07-15 14:15:18.363398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1554083 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1554083 /var/tmp/bperf.sock 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1554083 ']' 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:20.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.485 14:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.485 [2024-07-15 14:15:18.418620] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:20.485 [2024-07-15 14:15:18.418665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554083 ] 00:29:20.485 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.485 [2024-07-15 14:15:18.500383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.485 [2024-07-15 14:15:18.564770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.426 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.426 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:21.426 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.426 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.426 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:21.426 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.426 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.686 nvme0n1 00:29:21.686 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:21.686 14:15:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.686 Running I/O for 2 seconds... 
00:29:24.233 00:29:24.233 Latency(us) 00:29:24.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.233 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:24.233 nvme0n1 : 2.00 19921.18 77.82 0.00 0.00 6416.95 2949.12 14199.47 00:29:24.233 =================================================================================================================== 00:29:24.233 Total : 19921.18 77.82 0.00 0.00 6416.95 2949.12 14199.47 00:29:24.233 0 00:29:24.233 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:24.233 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:24.233 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:24.233 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:24.234 | select(.opcode=="crc32c") 00:29:24.234 | "\(.module_name) \(.executed)"' 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1554083 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1554083 ']' 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1554083 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1554083 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1554083' 00:29:24.234 killing process with pid 1554083 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1554083 00:29:24.234 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.234 00:29:24.234 Latency(us) 00:29:24.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.234 =================================================================================================================== 00:29:24.234 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.234 14:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1554083 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:24.234 14:15:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1554839 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1554839 /var/tmp/bperf.sock 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1554839 ']' 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.234 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:24.234 [2024-07-15 14:15:22.152390] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:24.234 [2024-07-15 14:15:22.152498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554839 ] 00:29:24.234 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:24.234 Zero copy mechanism will not be used. 
00:29:24.234 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.234 [2024-07-15 14:15:22.236714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.234 [2024-07-15 14:15:22.290108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.806 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.806 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:24.806 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:24.806 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:24.806 14:15:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:25.067 14:15:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.067 14:15:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.639 nvme0n1 00:29:25.639 14:15:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:25.639 14:15:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.639 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:25.639 Zero copy mechanism will not be used. 00:29:25.639 Running I/O for 2 seconds... 
00:29:27.603 00:29:27.604 Latency(us) 00:29:27.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.604 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:27.604 nvme0n1 : 2.04 3345.45 418.18 0.00 0.00 4691.81 880.64 44782.93 00:29:27.604 =================================================================================================================== 00:29:27.604 Total : 3345.45 418.18 0.00 0.00 4691.81 880.64 44782.93 00:29:27.604 0 00:29:27.604 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:27.604 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:27.604 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:27.604 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:27.604 | select(.opcode=="crc32c") 00:29:27.604 | "\(.module_name) \(.executed)"' 00:29:27.604 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.864 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:27.864 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:27.864 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:27.864 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1554839 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1554839 ']' 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1554839 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1554839 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1554839' 00:29:27.865 killing process with pid 1554839 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1554839 00:29:27.865 Received shutdown signal, test time was about 2.000000 seconds 00:29:27.865 00:29:27.865 Latency(us) 00:29:27.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.865 =================================================================================================================== 00:29:27.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.865 14:15:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1554839 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:28.126 14:15:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1555696 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1555696 /var/tmp/bperf.sock 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1555696 ']' 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:28.126 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.126 [2024-07-15 14:15:26.052699] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:29:28.126 [2024-07-15 14:15:26.052761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555696 ] 00:29:28.126 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.126 [2024-07-15 14:15:26.134487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.126 [2024-07-15 14:15:26.187988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.067 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:29.067 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:29.067 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:29.067 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:29.067 14:15:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:29.067 14:15:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.067 14:15:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.328 nvme0n1 00:29:29.328 14:15:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:29.328 14:15:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.589 Running I/O for 2 seconds... 
00:29:31.499 00:29:31.499 Latency(us) 00:29:31.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.499 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.499 nvme0n1 : 2.00 21840.89 85.32 0.00 0.00 5852.20 2252.80 13762.56 00:29:31.499 =================================================================================================================== 00:29:31.499 Total : 21840.89 85.32 0.00 0.00 5852.20 2252.80 13762.56 00:29:31.499 0 00:29:31.499 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:31.499 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:31.499 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:31.499 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:31.499 | select(.opcode=="crc32c") 00:29:31.499 | "\(.module_name) \(.executed)"' 00:29:31.499 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1555696 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1555696 ']' 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1555696 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555696 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555696' 00:29:31.759 killing process with pid 1555696 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1555696 00:29:31.759 Received shutdown signal, test time was about 2.000000 seconds 00:29:31.759 00:29:31.759 Latency(us) 00:29:31.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.759 =================================================================================================================== 00:29:31.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1555696 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:31.759 14:15:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1556443 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1556443 /var/tmp/bperf.sock 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1556443 ']' 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:31.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.759 14:15:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:32.019 [2024-07-15 14:15:29.902850] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:32.019 [2024-07-15 14:15:29.902904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556443 ] 00:29:32.019 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:32.019 Zero copy mechanism will not be used. 
00:29:32.019 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.019 [2024-07-15 14:15:29.984453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.019 [2024-07-15 14:15:30.038825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.589 14:15:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.589 14:15:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:32.589 14:15:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:32.589 14:15:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:32.589 14:15:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:32.851 14:15:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.851 14:15:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:33.420 nvme0n1 00:29:33.420 14:15:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:33.420 14:15:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:33.420 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:33.420 Zero copy mechanism will not be used. 00:29:33.420 Running I/O for 2 seconds... 
00:29:35.331 00:29:35.331 Latency(us) 00:29:35.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.331 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:35.331 nvme0n1 : 2.00 5012.77 626.60 0.00 0.00 3186.96 1856.85 12342.61 00:29:35.331 =================================================================================================================== 00:29:35.331 Total : 5012.77 626.60 0.00 0.00 3186.96 1856.85 12342.61 00:29:35.331 0 00:29:35.331 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:35.331 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:35.331 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:35.331 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:35.331 | select(.opcode=="crc32c") 00:29:35.331 | "\(.module_name) \(.executed)"' 00:29:35.331 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1556443 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1556443 ']' 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1556443 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556443 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556443' 00:29:35.591 killing process with pid 1556443 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1556443 00:29:35.591 Received shutdown signal, test time was about 2.000000 seconds 00:29:35.591 00:29:35.591 Latency(us) 00:29:35.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.591 =================================================================================================================== 00:29:35.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1556443 00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1554039 00:29:35.591 14:15:33 
00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1554039 ']'
00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1554039
00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:35.591 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1554039
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1554039'
00:29:35.851 killing process with pid 1554039
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1554039
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1554039
00:29:35.851
00:29:35.851 real 0m16.459s
00:29:35.851 user 0m32.217s
00:29:35.851 sys 0m3.472s
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:35.851 ************************************
00:29:35.851 END TEST nvmf_digest_clean
00:29:35.851 ************************************
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:35.851 ************************************
00:29:35.851 START TEST nvmf_digest_error
00:29:35.851 ************************************
00:29:35.851 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1557155
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1557155
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1557155 ']'
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:35.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:35.852 14:15:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.112 [2024-07-15 14:15:33.993535] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:29:36.112 [2024-07-15 14:15:33.993585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:36.112 EAL: No free 2048 kB hugepages reported on node 1
00:29:36.112 [2024-07-15 14:15:34.070314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:36.112 [2024-07-15 14:15:34.143094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:36.112 [2024-07-15 14:15:34.143135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:36.112 [2024-07-15 14:15:34.143142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:36.112 [2024-07-15 14:15:34.143148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:36.112 [2024-07-15 14:15:34.143154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
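The error-pass target is launched with tracing armed (-e 0xFFFF, instance id 0), and its app_setup_trace NOTICEs above spell out how to look at the trace. A minimal sketch following exactly those hints (assuming the spdk_trace binary from build/bin of the same checkout, run on the same host as the target):

    # snapshot the running nvmf target's tracepoints, per the NOTICE
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0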
00:29:36.112 [2024-07-15 14:15:34.143174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:36.683 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:36.683 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:36.683 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:36.683 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:36.683 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.944 [2024-07-15 14:15:34.809095] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.944 null0
00:29:36.944 [2024-07-15 14:15:34.889475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:36.944 [2024-07-15 14:15:34.913647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1557486
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1557486 /var/tmp/bperf.sock
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1557486 ']'
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:36.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:36.944 14:15:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.944 [2024-07-15 14:15:34.969289] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:29:36.944 [2024-07-15 14:15:34.969334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557486 ]
00:29:36.944 EAL: No free 2048 kB hugepages reported on node 1
00:29:36.944 [2024-07-15 14:15:35.048090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:37.205 [2024-07-15 14:15:35.101760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:37.777 14:15:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:38.349 nvme0n1
00:29:38.349 14:15:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:38.349 14:15:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:38.349 14:15:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.349 14:15:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:38.349 14:15:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:38.349 14:15:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
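The digest failures that follow are manufactured, not organic: crc32c was assigned to the accel "error" module when the target started, and the harness first disables injection so the controller can attach with clean digests, then arms corruption just before the timed run. Note that these use rpc_cmd, which talks to the target's default /var/tmp/spdk.sock socket rather than bperf.sock. A minimal sketch of that disarm/arm pair, both commands copied verbatim from the log (-i 256 presumably caps how many crc32c operations get corrupted; it is taken as-is):

    # the attach path must see good digests, so injection starts disabled
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # ...attach nvme0 from the bdevperf side as above, then corrupt the
    # target's subsequent crc32c results
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

With the target now emitting bad data digests, every read in the run below fails its digest check on the initiator and completes as a transient transport error (00/22); with --bdev-retry-count -1 set earlier, the bdev layer presumably keeps retrying for the full two seconds.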
00:29:38.349 Running I/O for 2 seconds...
00:29:38.349 [2024-07-15 14:15:36.392552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70)
00:29:38.349 [2024-07-15 14:15:36.392585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.349 [2024-07-15 14:15:36.392594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.349 [2024-07-15 14:15:36.405969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70)
00:29:38.349 [2024-07-15 14:15:36.405990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.349 [2024-07-15 14:15:36.405997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern — a data digest error from nvme_tcp.c:1459, the failed READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats with varying cid/lba values for the rest of the 2-second run; entries from 14:15:36.416003 through 14:15:37.647439 elided ...]
00:29:39.655 [2024-07-15 14:15:37.660977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70)
00:29:39.655 [2024-07-15 14:15:37.660994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.661000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.673666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.673682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.673688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.683666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.683681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.683687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.696568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.696584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.696590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.710700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.710717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.710723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.723184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.723201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.723207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.734904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.734920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.734926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.747577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.747596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.747602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.655 [2024-07-15 14:15:37.760484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.655 [2024-07-15 14:15:37.760501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.655 [2024-07-15 14:15:37.760507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.773048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.773064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.773071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.782667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.782684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.782690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.796094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.796110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.796116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.809622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.809639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.809645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.821861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.821877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.821884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.834014] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.834029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.834035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.845142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.845159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.845165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.859014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.859030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.859036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.870860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.870876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.870882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.880739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.916 [2024-07-15 14:15:37.880759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.916 [2024-07-15 14:15:37.880766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.916 [2024-07-15 14:15:37.896369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.896386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.896392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:37.909630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.909646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.909652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:39.917 [2024-07-15 14:15:37.922027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.922044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.922051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:37.932563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.932579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.932585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:37.945567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.945584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.945589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:37.960212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.960228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.960237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:37.973238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.973254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.973260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:37.984620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.984636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.984642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:37.995590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:37.995607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:37.995613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:38.008776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:38.008794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:38.008800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.917 [2024-07-15 14:15:38.020623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:39.917 [2024-07-15 14:15:38.020639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-07-15 14:15:38.020645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.033132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.033148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.033154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.046543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.046559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.046565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.057047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.057064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.057070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.071610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.071630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.071636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.082513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.082529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.082535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.094244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.094260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.094266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.107068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.107084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.107090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.119158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.119174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.177 [2024-07-15 14:15:38.119180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.177 [2024-07-15 14:15:38.131450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.177 [2024-07-15 14:15:38.131466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.131472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.143724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.143740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.143746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.156149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.156164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.156170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.168351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.168368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.168374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.178634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.178650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.178656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.191895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.191911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.191917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.204944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.204961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.204967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.217448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.217464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.217470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.229343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.229359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.229366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.241333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.241350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.241356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.252707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.252724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 
[2024-07-15 14:15:38.252730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.266375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.266392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.266398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.178 [2024-07-15 14:15:38.278809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.178 [2024-07-15 14:15:38.278825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.178 [2024-07-15 14:15:38.278834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.439 [2024-07-15 14:15:38.291294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.439 [2024-07-15 14:15:38.291311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.439 [2024-07-15 14:15:38.291317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.439 [2024-07-15 14:15:38.304307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.439 [2024-07-15 14:15:38.304323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.439 [2024-07-15 14:15:38.304329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.439 [2024-07-15 14:15:38.314494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.439 [2024-07-15 14:15:38.314510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.439 [2024-07-15 14:15:38.314517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.439 [2024-07-15 14:15:38.326803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.439 [2024-07-15 14:15:38.326819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.439 [2024-07-15 14:15:38.326825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.439 [2024-07-15 14:15:38.339980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d7c70) 00:29:40.439 [2024-07-15 14:15:38.339996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15188 len:1 SGL 
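Each three-line group above is a single injected failure: nvme_tcp_accel_seq_recv_compute_crc32_done reports the data digest (CRC32C) mismatch on the TCP qpair, the affected READ is printed, and its completion carries status (00/22), which spdk_nvme_print_completion renders as sct/sc in hex: status code type 0x0 (generic) with status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR. A minimal way to tally these completions from a saved copy of this console output (the file name bperf.log is hypothetical):

  # Count digest-induced failures in a captured log; every corrupted READ
  # completes with sct/sc 00/22, printed as "(00/22)" in the lines above.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log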
00:29:40.439
00:29:40.439 Latency(us)
00:29:40.439 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:40.439 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:40.439 	 nvme0n1             :       2.00   20674.93      80.76       0.00       0.00    6184.63    1829.55   15837.87
00:29:40.439 ===================================================================================================================
00:29:40.439 Total                                  :   20674.93      80.76       0.00       0.00    6184.63    1829.55   15837.87
00:29:40.439 0
00:29:40.439 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:40.439 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:40.439 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:40.439 | .driver_specific
00:29:40.439 | .nvme_error
00:29:40.439 | .status_code
00:29:40.439 | .command_transient_transport_error'
00:29:40.439 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1557486
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1557486 ']'
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1557486
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
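The trace above is how the harness turns that flood of notices into a pass/fail number: get_transient_errcount asks bdevperf's RPC server for nvme0n1's I/O statistics, and jq extracts the command_transient_transport_error counter from the per-bdev NVMe error stats (available because the controller is created with bdev_nvme_set_options --nvme-error-stat, as the setup trace for the next run shows). Here the count was 162, so the (( 162 > 0 )) assertion passed; the summary table is also self-consistent, since 20674.93 IOPS * 4096 B is about 80.76 MiB/s. Re-issuing the same query by hand would look roughly like this sketch (socket path and bdev name taken from the log):

  # Pull the transient-transport-error counter the test asserts on.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'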
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1557486
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1557486'
00:29:40.699 killing process with pid 1557486
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1557486
00:29:40.699 Received shutdown signal, test time was about 2.000000 seconds
00:29:40.699
00:29:40.699 Latency(us)
00:29:40.699 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:40.699 ===================================================================================================================
00:29:40.699 Total                                  :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1557486
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1558185
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1558185 /var/tmp/bperf.sock
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1558185 ']'
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:40.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:40.699 14:15:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:40.699 [2024-07-15 14:15:38.775042] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
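With the first bperf instance (pid 1557486) killed, run_bperf_err relaunches bdevperf for the next case: random 128 KiB reads at queue depth 16 for 2 seconds. The -z flag appears to start bdevperf idle, waiting for a perform_tests RPC, which lets the harness attach the controller and arm error injection before any I/O is issued. A condensed sketch of the launch (paths from the log; the SPDK variable is shorthand introduced here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 2: core mask 0x2 (single reactor, core 1); -r: private RPC socket;
  # -w/-o: random reads of 131072 bytes; -t 2 -q 16: 2 s at queue depth 16;
  # -z: do not start I/O until the perform_tests RPC arrives.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!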
00:29:40.699 [2024-07-15 14:15:38.775096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558185 ]
00:29:40.699 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:40.699 Zero copy mechanism will not be used.
00:29:40.699 EAL: No free 2048 kB hugepages reported on node 1
00:29:40.959 [2024-07-15 14:15:38.857349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:40.959 [2024-07-15 14:15:38.910577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:41.529 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:41.529 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:41.529 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.529 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.790 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:41.790 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.790 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.790 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.790 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.790 14:15:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:42.050 nvme0n1
00:29:42.050 14:15:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:42.050 14:15:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.050 14:15:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:42.050 14:15:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.050 14:15:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:42.050 14:15:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:42.050 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:42.050 Zero copy mechanism will not be used.
00:29:42.050 Running I/O for 2 seconds...
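This setup trace is the core of the test case: NVMe error-stat accounting is switched on, any crc32c error injection left over from a previous case is disabled, the controller is attached over TCP with --ddgst so the initiator verifies the data digest of received payloads (creating nvme0n1), and the accel layer is then told to corrupt crc32c results (-t corrupt -i 32; the -i value is taken verbatim from the trace and presumably sets an injection interval or count). Only after that does perform_tests start the timed run, so the digest errors that follow are expected. Replayed by hand, the sequence would look roughly like this sketch (the rpc helper function is shorthand introduced here):

  # Shorthand for SPDK's rpc.py aimed at bdevperf's private socket.
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # Keep per-status-code NVMe error counters; retry failed I/O indefinitely.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale crc32c injection, then attach with data digest enabled.
  rpc accel_error_inject_error -o crc32c -t disable
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the injection: corrupt crc32c results (-i 32, as in the trace above).
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the timed run in the idle (-z) bdevperf.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests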
00:29:42.050 [2024-07-15 14:15:40.151589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.050 [2024-07-15 14:15:40.151622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.050 [2024-07-15 14:15:40.151631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line group repeats from 14:15:40.162 through 14:15:40.475 on tqpair=(0x9920f0), now for the 131072-byte run: a data digest error, the failed READ (qid:1, cid:15, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with sqhd cycling 0021/0041/0061/0001 ...]
00:29:42.574 [2024-07-15 14:15:40.482380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.574 [2024-07-15 14:15:40.482397] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.482403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.491805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.491825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.491832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.502684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.502701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.502708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.510723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.510740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.510746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.519912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.519928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.519934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.530287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.530303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.530309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.540058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.540076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.540082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.551177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.551194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.551200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.560503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.560520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.560526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.569668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.569685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.569691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.578741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.578761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.578768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.587247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.587265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.587271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.595924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.595941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.595947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.605078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.605095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.605101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.615295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.615312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.615318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.625868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.625885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.625892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.634491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.634508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.634514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.643667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.643684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.643690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.655289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.655306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.655317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.665454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.665470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.665477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.674803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.674820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.674826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.574 [2024-07-15 14:15:40.681409] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.574 [2024-07-15 14:15:40.681426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.574 [2024-07-15 14:15:40.681432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.688118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.688135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.688141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.694425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.694442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.694448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.700582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.700598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.700604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.706878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.706895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.706902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.713334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.713350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.713356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.720257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.720276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.720282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:42.837 [2024-07-15 14:15:40.726944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.726960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.726966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.733373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.733389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.733395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.740529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.740545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.740551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.747576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.747593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.747599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.755024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.755040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.755046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.762412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.762428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.762435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.769342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.769359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.769365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.777276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.777293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.777299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.787338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.787355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.787361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.794797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.794814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.794820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.801866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.801883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.801888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.808876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.808892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.808898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.814992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.815009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.815015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.821747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.821768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.821774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.827835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.827851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.827857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.833614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.833631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.833637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.839307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.839327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.839333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.845149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.845165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.845171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.851044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.851060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.837 [2024-07-15 14:15:40.851067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.837 [2024-07-15 14:15:40.856466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.837 [2024-07-15 14:15:40.856483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.838 [2024-07-15 14:15:40.856489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.838 [2024-07-15 14:15:40.861908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:42.838 [2024-07-15 14:15:40.861924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.838 [2024-07-15 14:15:40.861930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.867555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.867572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.867577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.872886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.872902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.872908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.878724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.878740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.878746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.884503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.884519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.884525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.889913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.889930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.889936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.895335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.895352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.895357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.900891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.900908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.900913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.906329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.906345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.906351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.911835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.911852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.911857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.919774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.919790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.919796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.930435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.930451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.930457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.938899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.938916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.938922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:42.838 [2024-07-15 14:15:40.948195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:42.838 [2024-07-15 14:15:40.948211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.838 [2024-07-15 14:15:40.948219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:40.956909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:40.956926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:40.956932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:40.965918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:40.965935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:40.965941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:40.976487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:40.976504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:40.976510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:40.983040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:40.983057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:40.983063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:40.993923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:40.993940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:40.993946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.001699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.001716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.001723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.010117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.010133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.010139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.018348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.018365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.018371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.027248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.027269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.027275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.036925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.036943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.036948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.046096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.046114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.046120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.055689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.055707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.055713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.066365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.066382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.066388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.075155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.075173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.075179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.082783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.082800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.082806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.093968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.093985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.100 [2024-07-15 14:15:41.093991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.100 [2024-07-15 14:15:41.102907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.100 [2024-07-15 14:15:41.102925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.102931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.111328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.111346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.111352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.120878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.120896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.120902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.129057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.129075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.129081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.136705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.136722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.136728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.145670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.145687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.145693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.156090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.156107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.156113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.163385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.163402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.163409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.171871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.171889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.171895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.179620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.179638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.179646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.188682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.188700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.188706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.198363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.198381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.198387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.101 [2024-07-15 14:15:41.206407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.101 [2024-07-15 14:15:41.206425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.101 [2024-07-15 14:15:41.206430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.363 [2024-07-15 14:15:41.216764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.363 [2024-07-15 14:15:41.216782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.363 [2024-07-15 14:15:41.216788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.363 [2024-07-15 14:15:41.226894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.226912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.226918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.236122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.236139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.236145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.245265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.245283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.245289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.253730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.253748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.253758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.264340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.264361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.264367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.274681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.274698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.274704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.281584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.281602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.281608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.291879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.291897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.291903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.301630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.301648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.301654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.312418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.312436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.312442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.321996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.322014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.322020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.331169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.331187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.331193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.340846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.340864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.340870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.351843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.351861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.351867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.362871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.362888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.362894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.372394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.372412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.372418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.380764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.380782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.380788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.388128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.388146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.388152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.397993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.398010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.398016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.409218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.409236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.409242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.420383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.420401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.420407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.431807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.431824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.431833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.441343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.441361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.441367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.451091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.451109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.451115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.459577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.459595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.459602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.364 [2024-07-15 14:15:41.470280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.364 [2024-07-15 14:15:41.470297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.364 [2024-07-15 14:15:41.470303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.627 [2024-07-15 14:15:41.481589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.627 [2024-07-15 14:15:41.481607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.627 [2024-07-15 14:15:41.481613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.627 [2024-07-15 14:15:41.492769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.627 [2024-07-15 14:15:41.492787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.627 [2024-07-15 14:15:41.492793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.627 [2024-07-15 14:15:41.503812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.627 [2024-07-15 14:15:41.503830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.627 [2024-07-15 14:15:41.503836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:43.627 [2024-07-15 14:15:41.513420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.627 [2024-07-15 14:15:41.513438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.627 [2024-07-15 14:15:41.513444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.627 [2024-07-15 14:15:41.522355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.627 [2024-07-15 14:15:41.522373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.627 [2024-07-15 14:15:41.522379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.627 [2024-07-15 14:15:41.532698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.627 [2024-07-15 14:15:41.532716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.627 [2024-07-15 14:15:41.532722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:43.627 [2024-07-15 14:15:41.543385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0)
00:29:43.627 [2024-07-15 14:15:41.543404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.543410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.552039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.552057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.552063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.562416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.562434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.562440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.571271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.571289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.571295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.580524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.580542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.580548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.588553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.588571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.588577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.597160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.597178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.597187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.606586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.606604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.606610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.614475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.614493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.614499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.623452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.623470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.623476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.633276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.633293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.633300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.644259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.644276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.644282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.654150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.654167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.654174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.661961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.661979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.661985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.672381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.672398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.672404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.679324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.679344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.679350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.688217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.627 [2024-07-15 14:15:41.688235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.627 [2024-07-15 14:15:41.688241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.627 [2024-07-15 14:15:41.698272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.628 [2024-07-15 14:15:41.698290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.628 [2024-07-15 14:15:41.698296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.628 [2024-07-15 14:15:41.708931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.628 [2024-07-15 14:15:41.708949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.628 [2024-07-15 14:15:41.708955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.628 [2024-07-15 14:15:41.717630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.628 [2024-07-15 14:15:41.717647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.628 [2024-07-15 14:15:41.717653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.628 [2024-07-15 14:15:41.726283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.628 [2024-07-15 14:15:41.726300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.628 [2024-07-15 14:15:41.726306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.628 [2024-07-15 14:15:41.736350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.628 
[2024-07-15 14:15:41.736368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.628 [2024-07-15 14:15:41.736374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.746373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.746391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.746397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.756097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.756115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.756122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.765779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.765797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.765803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.774091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.774109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.774115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.781989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.782007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.782013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.793043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.793061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.793067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.802904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.802922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.802928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.813099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.813116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.813122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.823989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.824007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.824013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.834288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.834306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.890 [2024-07-15 14:15:41.834312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.890 [2024-07-15 14:15:41.845766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.890 [2024-07-15 14:15:41.845783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.845793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.854715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.854733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.854739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.862554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.862572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.862578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.871689] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.871707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.871713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.880514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.880532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.880538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.888926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.888944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.888950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.900435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.900453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.900459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.908391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.908409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.908415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.916971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.916988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.916995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.927102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.927123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.927128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:43.891 [2024-07-15 14:15:41.938488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.938506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.938513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.947822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.947841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.947848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.958108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.958126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.958132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.968205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.968223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.968229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.979069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.979087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.979093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.985879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.985897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.985903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.891 [2024-07-15 14:15:41.996237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:43.891 [2024-07-15 14:15:41.996255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.891 [2024-07-15 14:15:41.996261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.153 [2024-07-15 14:15:42.005539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.153 [2024-07-15 14:15:42.005557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.153 [2024-07-15 14:15:42.005563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.153 [2024-07-15 14:15:42.016546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.153 [2024-07-15 14:15:42.016564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.153 [2024-07-15 14:15:42.016570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.153 [2024-07-15 14:15:42.025938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.153 [2024-07-15 14:15:42.025955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.153 [2024-07-15 14:15:42.025961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.153 [2024-07-15 14:15:42.036439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.153 [2024-07-15 14:15:42.036457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.153 [2024-07-15 14:15:42.036463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.153 [2024-07-15 14:15:42.047044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.153 [2024-07-15 14:15:42.047061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.153 [2024-07-15 14:15:42.047067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.057262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.057280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.057286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.066281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.066299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.066305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.074887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.074905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.074911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.083847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.083865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.083871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.093792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.093809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.093819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.105310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.105328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.105333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.113765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.113782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.113788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.122463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.122480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.122486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.154 [2024-07-15 14:15:42.132546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9920f0) 00:29:44.154 [2024-07-15 14:15:42.132563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.154 [2024-07-15 14:15:42.132569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:44.154
00:29:44.154 Latency(us)
00:29:44.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.154 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:44.154 nvme0n1 : 2.00 3431.63 428.95 0.00 0.00 4660.18 1099.09 15291.73
00:29:44.154 ===================================================================================================================
00:29:44.154 Total : 3431.63 428.95 0.00 0.00 4660.18 1099.09 15291.73
00:29:44.154 0
00:29:44.154 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:44.154 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:44.154 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:44.154 | .driver_specific
00:29:44.154 | .nvme_error
00:29:44.154 | .status_code
00:29:44.154 | .command_transient_transport_error'
00:29:44.154 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 ))
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1558185
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1558185 ']'
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1558185
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558185
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558185'
00:29:44.415 killing process with pid 1558185
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1558185
00:29:44.415 Received shutdown signal, test time was about 2.000000 seconds
00:29:44.415
00:29:44.415 Latency(us)
00:29:44.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.415 ===================================================================================================================
00:29:44.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1558185
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1558867
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1558867 /var/tmp/bperf.sock
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1558867 ']'
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:44.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:44.415 14:15:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:44.677 [2024-07-15 14:15:42.554607] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:29:44.677 [2024-07-15 14:15:42.554667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558867 ]
00:29:44.677 EAL: No free 2048 kB hugepages reported on node 1
00:29:44.677 [2024-07-15 14:15:42.635530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:44.677 [2024-07-15 14:15:42.687431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:45.256 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:45.256 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:45.256 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:45.256 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:45.520 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:45.520 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:45.520 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.520 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:45.520 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:45.520 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:45.780 nvme0n1
00:29:45.780 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:45.780 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:45.780 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.780 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:45.780 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:45.780 14:15:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:46.041 Running I/O for 2 seconds...
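The xtrace above is the whole recipe for this error case: bdevperf is started on /var/tmp/bperf.sock, the controller is attached with --ddgst so the initiator verifies data digests, crc32c corruption is injected in the accel layer, and perform_tests then drives the randwrite I/O whose failures follow below. The transient-error count checked after each run (see the host/digest.sh@71 trace earlier) amounts to a helper along these lines. This is a sketch reconstructed from the traced commands, not the verbatim digest.sh source; the function wrapper is an assumption, while the rpc.py path, RPC socket, bdev name, and jq filter are exactly the ones shown in this log:

get_transient_errcount() {
  local bdev=$1  # bdev created by bdev_nvme_attach_controller above, e.g. nvme0n1
  # bdev_nvme_set_options --nvme-error-stat (traced above) is what makes bdev_get_iostat
  # carry per-status-code NVMe error counters under driver_specific.nvme_error
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
# pass/fail gate as traced at host/digest.sh@71: at least one transient transport error must be counted
(( $(get_transient_errcount nvme0n1) > 0 ))

00:29:46.041 [2024-07-15 14:15:43.978173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190eb760
00:29:46.041 [2024-07-15 14:15:43.979910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.041 [2024-07-15 14:15:43.979938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:46.041 [2024-07-15 14:15:43.988443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0
00:29:46.041 [2024-07-15 14:15:43.989542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.041 [2024-07-15 14:15:43.989561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:46.042 [2024-07-15 14:15:44.000215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0
00:29:46.042 [2024-07-15 14:15:44.001301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.042 [2024-07-15 14:15:44.001317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:46.042 [2024-07-15 14:15:44.012112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0
00:29:46.042 [2024-07-15 14:15:44.013204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.042 [2024-07-15 14:15:44.013221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:46.042 [2024-07-15 14:15:44.023906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0
00:29:46.042 [2024-07-15 14:15:44.025017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.042 [2024-07-15 14:15:44.025032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT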
ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:46.042 [2024-07-15 14:15:44.035648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 00:29:46.042 [2024-07-15 14:15:44.036741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.042 [2024-07-15 14:15:44.036764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:46.042 [2024-07-15 14:15:44.047380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 00:29:46.042 [2024-07-15 14:15:44.048478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.042 [2024-07-15 14:15:44.048493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:46.042 [2024-07-15 14:15:44.059129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 00:29:46.042 [2024-07-15 14:15:44.060181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.042 [2024-07-15 14:15:44.060197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:46.042 [2024-07-15 14:15:44.070862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 00:29:46.042 [2024-07-15 14:15:44.071943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.042 [2024-07-15 14:15:44.071959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:46.042 [2024-07-15 14:15:44.082593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 00:29:46.042 [2024-07-15 14:15:44.083697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.042 [2024-07-15 14:15:44.083713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:46.042 [2024-07-15 14:15:44.094321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 00:29:46.042 [2024-07-15 14:15:44.095421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.042 [2024-07-15 14:15:44.095437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:46.042 [2024-07-15 14:15:44.106051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 00:29:46.042 [2024-07-15 14:15:44.107105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.042 [2024-07-15 14:15:44.107120] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:46.042 [2024-07-15 14:15:44.117780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f81e0
00:29:46.042 [2024-07-15 14:15:44.118837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.042 [2024-07-15 14:15:44.118853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
[... 2024-07-15 14:15:44.129515 through 14:15:44.881065 (console 00:29:46.042-00:29:46.831): the same three-line cycle repeats about every 11.7 ms on tqpair=(0x1743ac0) with pdu=0x2000190f81e0 -- data_crc32_calc_done data digest error, WRITE print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:0059 -- for qid:1 cids 108-120, 97, 84, 121-126, 2, 1, 0, 9 down to 3, 95, 96, then 98-120, 97, 84, 121-126, 2 again; lba varies per command ...]
00:29:46.831 [2024-07-15 14:15:44.893369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190ef6a8
00:29:46.831 [2024-07-15 14:15:44.895094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.831 [2024-07-15 14:15:44.895109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:46.831 [2024-07-15 14:15:44.905057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28
00:29:46.831 [2024-07-15 14:15:44.906768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.831 [2024-07-15 14:15:44.906784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:46.831 [2024-07-15 14:15:44.914867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f2d80
00:29:46.831 [2024-07-15 14:15:44.916075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.831 [2024-07-15 14:15:44.916091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:46.831 [2024-07-15 14:15:44.929419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f31b8
00:29:46.831 [2024-07-15 14:15:44.931456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.831 [2024-07-15 14:15:44.931471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:46.831 [2024-07-15 14:15:44.941106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f7970
00:29:46.831 [2024-07-15 14:15:44.943107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:46.831 [2024-07-15 14:15:44.943122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:47.093 [2024-07-15 14:15:44.951309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28
00:29:47.093 [2024-07-15 14:15:44.952695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.093 [2024-07-15 14:15:44.952713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
[... 2024-07-15 14:15:44.963058 through 14:15:45.726945 (console 00:29:47.093-00:29:47.619): the cycle continues on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 for qid:1 cids cycling 98, 100, 102-110, 96, ...; lba varies, every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd:0076 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.738680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.748985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.750373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.750389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.760704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.762105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.762123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.772404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.773791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.773806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.784133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.785516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.785531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.795839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.797237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.797252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.807541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.808942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.808958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.819256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 
14:15:45.820626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.820642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.830958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.832334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.832350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.842660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.844011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.844027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.854395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.855783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.855799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.866102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.867528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.867543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.877839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.879218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.879234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.889575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.890978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.890994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.901292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with 
pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.902690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.902706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.913005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.914386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.914402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.924720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.926100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.926115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.882 [2024-07-15 14:15:45.936425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.882 [2024-07-15 14:15:45.937783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.882 [2024-07-15 14:15:45.937798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.883 [2024-07-15 14:15:45.948177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.883 [2024-07-15 14:15:45.949566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.883 [2024-07-15 14:15:45.949581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.883 [2024-07-15 14:15:45.959892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743ac0) with pdu=0x2000190f3a28 00:29:47.883 [2024-07-15 14:15:45.961290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.883 [2024-07-15 14:15:45.961306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.883 00:29:47.883 Latency(us) 00:29:47.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.883 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.883 nvme0n1 : 2.00 21685.62 84.71 0.00 0.00 5894.81 2239.15 16711.68 00:29:47.883 =================================================================================================================== 00:29:47.883 Total : 21685.62 84.71 0.00 0.00 5894.81 2239.15 16711.68 00:29:47.883 0 00:29:47.883 14:15:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
00:29:47.883 14:15:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:47.883 14:15:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:47.883 14:15:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:47.883 | .driver_specific
00:29:47.883 | .nvme_error
00:29:47.883 | .status_code
00:29:47.883 | .command_transient_transport_error'
00:29:47.883 14:15:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1558867
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1558867 ']'
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1558867
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558867
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558867'
00:29:48.145 killing process with pid 1558867
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1558867
00:29:48.145 Received shutdown signal, test time was about 2.000000 seconds
00:29:48.145
00:29:48.145 Latency(us)
00:29:48.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:48.145 ===================================================================================================================
00:29:48.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:48.145 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1558867
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1559558
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1559558 /var/tmp/bperf.sock
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1559558 ']'
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
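The get_transient_errcount trace above amounts to one RPC piped through jq. A minimal standalone sketch of the same query, assuming a bdevperf instance is still listening on the socket (socket path, rpc.py path, and filter are copied from the trace; the filter is just the one-line form of the multi-line jq program above):

# Read the transient transport error counter kept for nvme0n1; the field is
# populated because bdev_nvme_set_options is invoked with --nvme-error-stat.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'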
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:48.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:48.436 14:15:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:48.436 [2024-07-15 14:15:46.351212] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:29:48.436 [2024-07-15 14:15:46.351267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559558 ]
00:29:48.436 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:48.436 Zero copy mechanism will not be used.
00:29:48.436 EAL: No free 2048 kB hugepages reported on node 1
00:29:48.436 [2024-07-15 14:15:46.431966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:48.436 [2024-07-15 14:15:46.485477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:49.382 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:49.644 nvme0n1
00:29:49.644 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:49.644 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.644 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
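Condensed, the setup just traced is four RPCs against the bperf socket: enable per-status NVMe error counting with unlimited bdev retries, make sure no CRC-32C error injection is armed while the controller attaches, attach with the TCP data digest (--ddgst) enabled, then arm the injector. A sketch of the same sequence, with $RPC introduced here purely as shorthand (all arguments are copied from the trace):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe statuses; retry failed I/O indefinitely
$RPC accel_error_inject_error -o crc32c -t disable                   # injection off while attaching
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # data digest enabled on the TCP transport
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results from here on

The perform_tests call below then drives the 131072-byte randwrite workload for two seconds; each corrupted digest surfaces as a data_crc32_calc_done error, the affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the -1 retry count lets the bdev layer retry it rather than fail the run.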
00:29:49.644 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.644 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:49.644 14:15:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:49.644 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:49.644 Zero copy mechanism will not be used.
00:29:49.644 Running I/O for 2 seconds...
00:29:49.644 [2024-07-15 14:15:47.672192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90
00:29:49.644 [2024-07-15 14:15:47.672532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.644 [2024-07-15 14:15:47.672559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:49.644 [2024-07-15 14:15:47.681112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90
00:29:49.644 [2024-07-15 14:15:47.681468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.644 [2024-07-15 14:15:47.681487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:49.644 [2024-07-15 14:15:47.687437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90
00:29:49.644 [2024-07-15 14:15:47.687791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.644 [2024-07-15 14:15:47.687808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:49.644 [2024-07-15 14:15:47.697552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90
00:29:49.644 [2024-07-15 14:15:47.697871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.644 [2024-07-15 14:15:47.697888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:49.644 [2024-07-15 14:15:47.706135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90
00:29:49.644 [2024-07-15 14:15:47.706259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.644 [2024-07-15 14:15:47.706274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:49.644 [2024-07-15 14:15:47.714951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90
00:29:49.644 [2024-07-15 14:15:47.715310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.644 [2024-07-15 14:15:47.715327] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.644 [2024-07-15 14:15:47.722021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.644 [2024-07-15 14:15:47.722334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.644 [2024-07-15 14:15:47.722351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.644 [2024-07-15 14:15:47.729245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.644 [2024-07-15 14:15:47.729455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.644 [2024-07-15 14:15:47.729471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.644 [2024-07-15 14:15:47.738926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.644 [2024-07-15 14:15:47.739140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.644 [2024-07-15 14:15:47.739155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.644 [2024-07-15 14:15:47.744068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.644 [2024-07-15 14:15:47.744271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.644 [2024-07-15 14:15:47.744289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.644 [2024-07-15 14:15:47.748404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.644 [2024-07-15 14:15:47.748605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.644 [2024-07-15 14:15:47.748621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.644 [2024-07-15 14:15:47.753934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.644 [2024-07-15 14:15:47.754244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.645 [2024-07-15 14:15:47.754261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.762038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.762236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 
[2024-07-15 14:15:47.762252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.767162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.767359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.767374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.775447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.775783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.775799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.784234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.784588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.784605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.791989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.792306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.792322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.800275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.800614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.800630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.806015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.806325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.806341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.812572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.813003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.813021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.818939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.819302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.824153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.824352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.824368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.834278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.834636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.834653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.844606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.845002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.845019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.854160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.907 [2024-07-15 14:15:47.854534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.907 [2024-07-15 14:15:47.854551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.907 [2024-07-15 14:15:47.862681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.862933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.862949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.871103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.871439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.871458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.881989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.882297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.882314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.890689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.891099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.891117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.900783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.901009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.901024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.909971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.910286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.910304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.919256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.919460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.919476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.929066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.929440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.929457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.938783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.939112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.939128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.949163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.949495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.949512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.957770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.958129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.958146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.968133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.968482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.968499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.977415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.977853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.977871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.986168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.986500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.986516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.994585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:47.994801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:47.994817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:47.999777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 
[2024-07-15 14:15:48.000178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:48.000195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:48.004776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:48.005156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:48.005172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:48.009849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:48.010225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:48.010241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.908 [2024-07-15 14:15:48.015476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:49.908 [2024-07-15 14:15:48.015906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.908 [2024-07-15 14:15:48.015923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.023288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.023491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.023507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.031401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.031816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.031833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.039594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.039938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.039955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.047114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.047315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.047331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.054112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.054434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.054451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.060560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.060896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.060912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.067576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.067938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.067955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.076043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.076366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.076383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.082768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.082970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.170 [2024-07-15 14:15:48.082989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.170 [2024-07-15 14:15:48.091815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.170 [2024-07-15 14:15:48.092139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.092156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.171 [2024-07-15 14:15:48.101096] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.171 [2024-07-15 14:15:48.101480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.101497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.171 [2024-07-15 14:15:48.109690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.171 [2024-07-15 14:15:48.110087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.110104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.171 [2024-07-15 14:15:48.117823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.171 [2024-07-15 14:15:48.118155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.118172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.171 [2024-07-15 14:15:48.124077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.171 [2024-07-15 14:15:48.124397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.124414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.171 [2024-07-15 14:15:48.130458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.171 [2024-07-15 14:15:48.130901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.130919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.171 [2024-07-15 14:15:48.136947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.171 [2024-07-15 14:15:48.137251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.137267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.171 [2024-07-15 14:15:48.142036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:50.171 [2024-07-15 14:15:48.142385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.171 [2024-07-15 14:15:48.142401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:50.171 [2024-07-15 14:15:48.146894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90
00:29:50.171 [2024-07-15 14:15:48.147203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.171 [2024-07-15 14:15:48.147220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence repeats for roughly 100 further WRITE commands on tqpair=(0x1743bf0) with pdu=0x2000190fef90 — data_crc32_calc_done data digest error, the failing WRITE (qid:1 cid:15 nsid:1, len:32, lba varies), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 — from 2024-07-15 14:15:48.152341 through 14:15:49.165075 ...]
00:29:51.225 [2024-07-15 14:15:49.177616]
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.178103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.178120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.189157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.189427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.189443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.201423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.201860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.201876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.211392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.211612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.211628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.222342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.222714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.222731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.232682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.233043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.233060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.243832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.244035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.244051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:51.225 [2024-07-15 14:15:49.255710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.256184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.256201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.267109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.267459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.267476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.280559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.280934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.280951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.291850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.292203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.292222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.301164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.301561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.301577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.310655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.311128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.311146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.319200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.319557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.319574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.327821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.328184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.328201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.225 [2024-07-15 14:15:49.337273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.225 [2024-07-15 14:15:49.337463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.225 [2024-07-15 14:15:49.337479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.487 [2024-07-15 14:15:49.345094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.487 [2024-07-15 14:15:49.345310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.487 [2024-07-15 14:15:49.345326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.487 [2024-07-15 14:15:49.352042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.487 [2024-07-15 14:15:49.352233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.487 [2024-07-15 14:15:49.352249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.487 [2024-07-15 14:15:49.361476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.361797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.361814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.368924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.369235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.369252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.377213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.377518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.377534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.386604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.386815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.386831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.395757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.396092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.396108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.404424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.404778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.404794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.414339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.414709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.414726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.423549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.423746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.423766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.432735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.433050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.433067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.441631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.441920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.441937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.451350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.451543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.451559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.461224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.461506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.461523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.470673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.470997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.471014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.480601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.480943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.480959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.490953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.491333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.491349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.501778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.502105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.502121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.511375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.511723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 
[2024-07-15 14:15:49.511740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.520173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.520628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.520645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.530102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.530542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.530562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.538288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.538478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.538494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.545211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.545427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.545442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.552051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.552443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.488 [2024-07-15 14:15:49.552460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.488 [2024-07-15 14:15:49.560910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.488 [2024-07-15 14:15:49.561103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.489 [2024-07-15 14:15:49.561120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.489 [2024-07-15 14:15:49.568577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.489 [2024-07-15 14:15:49.568772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.489 [2024-07-15 14:15:49.568787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.489 [2024-07-15 14:15:49.576161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.489 [2024-07-15 14:15:49.576654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.489 [2024-07-15 14:15:49.576671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.489 [2024-07-15 14:15:49.584227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.489 [2024-07-15 14:15:49.584464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.489 [2024-07-15 14:15:49.584479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.489 [2024-07-15 14:15:49.591834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.489 [2024-07-15 14:15:49.592143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.489 [2024-07-15 14:15:49.592160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.489 [2024-07-15 14:15:49.598243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.489 [2024-07-15 14:15:49.598587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.489 [2024-07-15 14:15:49.598604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.604637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.604919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.604935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.612554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.612877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.612893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.619000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.619304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.619320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.625128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.625433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.625449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.632246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.632474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.632489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.639589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.639891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.639908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.648163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.648526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.648543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.654544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.654979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.654997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.751 [2024-07-15 14:15:49.663825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1743bf0) with pdu=0x2000190fef90 00:29:51.751 [2024-07-15 14:15:49.664017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.751 [2024-07-15 14:15:49.664033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.751 00:29:51.751 Latency(us) 00:29:51.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.751 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 
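The run of repeated errors above is the digest-error test doing its job: for each corrupted WRITE, the NVMe/TCP data-digest check in data_crc32_calc_done() recomputes the CRC32C over the data PDU, flags the mismatch, and the command is completed with status (00/22), that is status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), so the host counts it as a retryable transport failure rather than a media or data error. A rough way to tally the failures from a saved copy of this console output (build.log is a hypothetical file name):

    # one completion line is printed per injected digest failure
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log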
00:29:51.751
00:29:51.751 Latency(us)
00:29:51.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.751 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:51.751 nvme0n1 : 2.01 4041.88 505.23 0.00 0.00 3951.73 1815.89 12888.75
00:29:51.751 ===================================================================================================================
00:29:51.751 Total : 4041.88 505.23 0.00 0.00 3951.73 1815.89 12888.75
00:29:51.751 0
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:51.751 | .driver_specific
00:29:51.751 | .nvme_error
00:29:51.751 | .status_code
00:29:51.751 | .command_transient_transport_error'
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 261 > 0 ))
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1559558
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1559558 ']'
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1559558
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:51.751 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1559558
00:29:52.012 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:52.012 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:52.012 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1559558'
00:29:52.012 killing process with pid 1559558
00:29:52.012 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1559558
00:29:52.012 Received shutdown signal, test time was about 2.000000 seconds
00:29:52.012
00:29:52.012 Latency(us)
00:29:52.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.012 ===================================================================================================================
00:29:52.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:52.012 14:15:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1559558
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1557155
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1557155 ']'
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1557155
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1557155
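The transient-error check traced above boils down to one RPC and one jq filter. A minimal standalone sketch, assuming the same bperf RPC socket (/var/tmp/bperf.sock) shown in the log and an SPDK checkout in $SPDK_DIR (a hypothetical variable standing in for the absolute Jenkins workspace path):

    # Count NVMe completions with Transient Transport Error status seen by a
    # bdevperf bdev, mirroring get_transient_errcount in host/digest.sh.
    get_transient_errcount() {
        local bdev=$1
        "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test asserts that at least one injected error was observed (261 in this run):
    (( $(get_transient_errcount nvme0n1) > 0 ))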
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1557155'
00:29:52.012 killing process with pid 1557155
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1557155
00:29:52.012 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1557155
00:29:52.273
00:29:52.273 real 0m16.277s
00:29:52.273 user 0m32.074s
00:29:52.273 sys 0m3.297s
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:52.273 ************************************
00:29:52.273 END TEST nvmf_digest_error
00:29:52.273 ************************************
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:52.273 rmmod nvme_tcp
00:29:52.273 rmmod nvme_fabrics
00:29:52.273 rmmod nvme_keyring
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1557155 ']'
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1557155
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1557155 ']'
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1557155
00:29:52.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1557155) - No such process
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1557155 is not found'
00:29:52.273 Process with pid 1557155 is not found
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
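The nvmftestfini teardown traced here is a fixed sequence: sync, retry unloading the initiator-side kernel modules (the rmmod output shows nvme_tcp, nvme_fabrics and nvme_keyring going away), then kill the target process if it is still around. A condensed sketch of that module teardown; the break-on-success and back-off in the loop body are assumptions, since the trace only shows the {1..20} loop header and one modprobe:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drops the nvme_fabrics/nvme_keyring dependents
        sleep 1                            # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e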
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:52.273 14:15:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:54.817 14:15:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:54.817
00:29:54.817 real 0m42.704s
00:29:54.817 user 1m6.408s
00:29:54.817 sys 0m12.462s
00:29:54.817 14:15:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:54.817 14:15:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:54.817 ************************************
00:29:54.817 END TEST nvmf_digest
00:29:54.817 ************************************
00:29:54.817 14:15:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:29:54.817 14:15:52 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:29:54.817 14:15:52 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:29:54.817 14:15:52 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:29:54.817 14:15:52 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:54.817 14:15:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:29:54.817 14:15:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:54.817 14:15:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:54.817 ************************************
00:29:54.817 START TEST nvmf_bdevperf
00:29:54.817 ************************************
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:54.817 * Looking for test storage...
00:29:54.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
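The common.sh preamble above pins down the host identity once per run: nvme gen-hostnqn emits a fresh nqn.2014-08.org.nvmexpress:uuid:<uuid> NQN, the uuid suffix doubles as the host ID, and both are packed into an argv fragment that later `nvme connect` calls reuse. A minimal sketch of the same pattern; the parameter expansion used to split out the uuid is an assumption, since the trace only shows the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # assumed: strip everything through ":uuid:"
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVME_CONNECT='nvme connect'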
00:29:54.817 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
00:29:54.818 14:15:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=()
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=()
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=()
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=()
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=()
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=()
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:30:02.973 Found 0000:31:00.0 (0x8086 - 0x159b)
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:30:02.973 Found 0000:31:00.1 (0x8086 - 0x159b)
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:30:02.973 Found net devices under 0000:31:00.0: cvl_0_0
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:30:02.973 Found net devices under 0000:31:00.1: cvl_0_1
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:30:02.973 00:30:02.973 --- 10.0.0.2 ping statistics --- 00:30:02.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.973 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:30:02.973 00:30:02.973 --- 10.0.0.1 ping statistics --- 00:30:02.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.973 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1564927 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1564927 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1564927 ']' 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.973 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.974 14:16:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.974 [2024-07-15 14:16:00.764689] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:02.974 [2024-07-15 14:16:00.764765] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.974 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.974 [2024-07-15 14:16:00.864621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:02.974 [2024-07-15 14:16:00.958796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.974 [2024-07-15 14:16:00.958880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.974 [2024-07-15 14:16:00.958889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.974 [2024-07-15 14:16:00.958896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.974 [2024-07-15 14:16:00.958903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.974 [2024-07-15 14:16:00.959048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.974 [2024-07-15 14:16:00.959342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.974 [2024-07-15 14:16:00.959344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 [2024-07-15 14:16:01.574560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 Malloc0 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 [2024-07-15 14:16:01.638020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.545 { 00:30:03.545 "params": { 00:30:03.545 "name": "Nvme$subsystem", 00:30:03.545 "trtype": "$TEST_TRANSPORT", 00:30:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.545 "adrfam": "ipv4", 00:30:03.545 "trsvcid": "$NVMF_PORT", 00:30:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.545 "hdgst": ${hdgst:-false}, 00:30:03.545 "ddgst": ${ddgst:-false} 00:30:03.545 }, 00:30:03.545 "method": "bdev_nvme_attach_controller" 00:30:03.545 } 00:30:03.545 EOF 00:30:03.545 )") 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:03.545 14:16:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:03.545 "params": { 00:30:03.545 "name": "Nvme1", 00:30:03.545 "trtype": "tcp", 00:30:03.545 "traddr": "10.0.0.2", 00:30:03.545 "adrfam": "ipv4", 00:30:03.545 "trsvcid": "4420", 00:30:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.545 "hdgst": false, 00:30:03.545 "ddgst": false 00:30:03.545 }, 00:30:03.545 "method": "bdev_nvme_attach_controller" 00:30:03.545 }' 00:30:03.804 [2024-07-15 14:16:01.700589] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
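The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket the target opened. Reproducing this target bring-up by hand would look roughly like the sketch below; the RPC names and flags are copied from the trace, while the invocation style and relative path are assumptions, not the suite's exact code:

    # Sketch: manual equivalent of the rpc_cmd trace above (assumed paths).
    RPC="scripts/rpc.py"                          # uses /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192  # TCP transport, flags as in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Since the target was started under ip netns exec cvl_0_0_ns_spdk, the 10.0.0.2:4420 listener lives inside that namespace, but the RPC Unix socket is reachable from the default namespace, so rpc.py needs no netns prefix.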
00:30:03.804 [2024-07-15 14:16:01.700677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565275 ] 00:30:03.804 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.804 [2024-07-15 14:16:01.768505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.804 [2024-07-15 14:16:01.833093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.064 Running I/O for 1 seconds... 00:30:05.001 00:30:05.001 Latency(us) 00:30:05.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:05.001 Verification LBA range: start 0x0 length 0x4000 00:30:05.001 Nvme1n1 : 1.01 8957.46 34.99 0.00 0.00 14223.09 3072.00 14527.15 00:30:05.001 =================================================================================================================== 00:30:05.001 Total : 8957.46 34.99 0.00 0.00 14223.09 3072.00 14527.15 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1565565 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.265 { 00:30:05.265 "params": { 00:30:05.265 "name": "Nvme$subsystem", 00:30:05.265 "trtype": "$TEST_TRANSPORT", 00:30:05.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.265 "adrfam": "ipv4", 00:30:05.265 "trsvcid": "$NVMF_PORT", 00:30:05.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.265 "hdgst": ${hdgst:-false}, 00:30:05.265 "ddgst": ${ddgst:-false} 00:30:05.265 }, 00:30:05.265 "method": "bdev_nvme_attach_controller" 00:30:05.265 } 00:30:05.265 EOF 00:30:05.265 )") 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:05.265 14:16:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:05.265 "params": { 00:30:05.265 "name": "Nvme1", 00:30:05.265 "trtype": "tcp", 00:30:05.265 "traddr": "10.0.0.2", 00:30:05.265 "adrfam": "ipv4", 00:30:05.265 "trsvcid": "4420", 00:30:05.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.266 "hdgst": false, 00:30:05.266 "ddgst": false 00:30:05.266 }, 00:30:05.266 "method": "bdev_nvme_attach_controller" 00:30:05.266 }' 00:30:05.266 [2024-07-15 14:16:03.215456] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
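The --json /dev/fd/62 and /dev/fd/63 arguments above are bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza shown in the trace and bdevperf reads it as its startup configuration. A standalone equivalent would write the same stanza to a plain file, wrapped in SPDK's standard subsystems/config envelope; note the harness's generator may emit additional entries, so treat this minimal version as an assumption:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1

With -q 128 -o 4096 -w verify -t 1, bdevperf keeps up to 128 outstanding 4096-byte verify I/Os against the attached Nvme1n1 bdev for one second, which is the run summarized in the latency table above; the 15-second -f run below uses the same configuration.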
00:30:05.266 [2024-07-15 14:16:03.215512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565565 ]
00:30:05.266 EAL: No free 2048 kB hugepages reported on node 1
00:30:05.266 [2024-07-15 14:16:03.280095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:05.266 [2024-07-15 14:16:03.343702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:05.833 Running I/O for 15 seconds...
00:30:08.389 14:16:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1564927
00:30:08.389 14:16:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:30:08.389 [2024-07-15 14:16:06.182380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.389 [2024-07-15 14:16:06.182419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.389-00:30:08.392 [~125 further near-identical NOTICE pairs elided: every remaining queued READ/WRITE (sqid:1, lba values between 95760 and 96768, len:8) is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1]
00:30:08.392 [2024-07-15 14:16:06.184572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0f940 is same with the state(5) to be set
00:30:08.392 [2024-07-15 14:16:06.184580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:08.392 [2024-07-15 14:16:06.184586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:08.392 [2024-07-15 14:16:06.184593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0
00:30:08.392 [2024-07-15 14:16:06.184602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.392 [2024-07-15 14:16:06.184641] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb:
00:30:08.392 [2024-07-15 14:16:06.184681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:08.392 [2024-07-15 14:16:06.184691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.392 [2024-07-15 14:16:06.184700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:08.392 [2024-07-15 14:16:06.184707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.392 [2024-07-15 14:16:06.184715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:08.392 [2024-07-15 14:16:06.184722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.392 [2024-07-15 14:16:06.184730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:08.392 [2024-07-15 14:16:06.184737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.392 [2024-07-15 14:16:06.184743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.392 [2024-07-15 14:16:06.188275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.392 [2024-07-15 14:16:06.188295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.392 [2024-07-15 14:16:06.189182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.392 [2024-07-15 14:16:06.189219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.392 [2024-07-15 14:16:06.189230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.392 [2024-07-15 14:16:06.189472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.392 [2024-07-15 14:16:06.189696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.392 [2024-07-15 14:16:06.189704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.392 [2024-07-15 14:16:06.189713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.392 [2024-07-15 14:16:06.193277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
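Every reset attempt from here on dies in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is listening at 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port), so each new qpair socket is refused and the controller stays in the failed state. A standalone illustration of the same failure (not SPDK code; the address and port are the ones in the log):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* With the host up but no NVMe-oF target listening on 10.0.0.2:4420,
     * the kernel answers with RST and this prints:
     *   connect() failed, errno = 111 (Connection refused) */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}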
00:30:08.392 [2024-07-15 14:16:06.202495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.392 [2024-07-15 14:16:06.203216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.392 [2024-07-15 14:16:06.203254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.392 [2024-07-15 14:16:06.203264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.392 [2024-07-15 14:16:06.203504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.392 [2024-07-15 14:16:06.203727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.392 [2024-07-15 14:16:06.203736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.392 [2024-07-15 14:16:06.203744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.392 [2024-07-15 14:16:06.207318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.392 [2024-07-15 14:16:06.216341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.392 [2024-07-15 14:16:06.217001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.392 [2024-07-15 14:16:06.217038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.392 [2024-07-15 14:16:06.217049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.392 [2024-07-15 14:16:06.217289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.392 [2024-07-15 14:16:06.217512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.392 [2024-07-15 14:16:06.217520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.392 [2024-07-15 14:16:06.217528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.392 [2024-07-15 14:16:06.221109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.230347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.231034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.231071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.231082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.231322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.231545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.231553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.231561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.235128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.244146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.244747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.244772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.244780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.245001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.245220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.245228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.245234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.248790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.258007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.258593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.258608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.258620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.258845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.259065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.259072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.259079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.262631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.271854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.272387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.272402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.272409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.272628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.272853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.272861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.272868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.276418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.285849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.286395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.286411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.286419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.286638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.286869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.286879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.286886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.290439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.299662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.300329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.300366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.300376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.300616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.300848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.300861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.300869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.304427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.313647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.314289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.314327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.314337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.314576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.314806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.314815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.314822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.318377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.327604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.393 [2024-07-15 14:16:06.328266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.393 [2024-07-15 14:16:06.328302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.393 [2024-07-15 14:16:06.328313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.393 [2024-07-15 14:16:06.328552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.393 [2024-07-15 14:16:06.328783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.393 [2024-07-15 14:16:06.328792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.393 [2024-07-15 14:16:06.328799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.393 [2024-07-15 14:16:06.332354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.393 [2024-07-15 14:16:06.341566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.342139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.342158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.342166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.342386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.342605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.342614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.342621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.346173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.355386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.355966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.355982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.355990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.356210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.356428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.356435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.356442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.359995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.369199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.369756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.369771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.369779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.369998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.370216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.370224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.370230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.373778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.383194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.383747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.383766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.383774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.383992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.384211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.384218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.384225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.387775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.397189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.397794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.397816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.397824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.398051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.398271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.398278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.398285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.401838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.411045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.411655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.411692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.411704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.411953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.412177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.412185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.412192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.415743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.425045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.425609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.425645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.425656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.425902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.426126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.426134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.426141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.429693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.438907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.439610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.439647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.439657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.439905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.440129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.440137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.440149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.443703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.452707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.453283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.453320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.453332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.453572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.453802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.453811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.453819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.457375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.466598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.467275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.467311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.467322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.467561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.467793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.467802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.467810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.471365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.480583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.394 [2024-07-15 14:16:06.481264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.394 [2024-07-15 14:16:06.481301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.394 [2024-07-15 14:16:06.481312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.394 [2024-07-15 14:16:06.481551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.394 [2024-07-15 14:16:06.481781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.394 [2024-07-15 14:16:06.481790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.394 [2024-07-15 14:16:06.481798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.394 [2024-07-15 14:16:06.485353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.394 [2024-07-15 14:16:06.494569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.395 [2024-07-15 14:16:06.495243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.395 [2024-07-15 14:16:06.495280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.395 [2024-07-15 14:16:06.495290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.395 [2024-07-15 14:16:06.495529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.395 [2024-07-15 14:16:06.495760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.395 [2024-07-15 14:16:06.495769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.395 [2024-07-15 14:16:06.495776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.395 [2024-07-15 14:16:06.499332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.656 [2024-07-15 14:16:06.508542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.656 [2024-07-15 14:16:06.509100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.656 [2024-07-15 14:16:06.509119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.656 [2024-07-15 14:16:06.509127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.656 [2024-07-15 14:16:06.509347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.656 [2024-07-15 14:16:06.509566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.656 [2024-07-15 14:16:06.509574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.656 [2024-07-15 14:16:06.509581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.656 [2024-07-15 14:16:06.513134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.656 [2024-07-15 14:16:06.522358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.656 [2024-07-15 14:16:06.522937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.656 [2024-07-15 14:16:06.522954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.656 [2024-07-15 14:16:06.522961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.656 [2024-07-15 14:16:06.523181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.656 [2024-07-15 14:16:06.523400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.656 [2024-07-15 14:16:06.523407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.656 [2024-07-15 14:16:06.523414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.656 [2024-07-15 14:16:06.526965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.656 [2024-07-15 14:16:06.536171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.536712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.536727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.536734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.536963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.537183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.537191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.537197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.540743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.550163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.550745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.550765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.550773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.550991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.551210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.551219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.551225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.554774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.563994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.564666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.564704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.564714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.564961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.565185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.565193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.565200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.568758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.577971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.578570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.578588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.578596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.578820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.579041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.579049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.579060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.582614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.591830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.592370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.592384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.592392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.592611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.592836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.592844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.592851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.596398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.605817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.606405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.606442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.606453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.606692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.606923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.606932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.606939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.610493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.619707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.620278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.620296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.620304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.620524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.620743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.620750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.620763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.624312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.633518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.634035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.634055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.634062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.634281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.634500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.634508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.634514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.638066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.647479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.648203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.648239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.648250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.648489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.648712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.648719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.648727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.652292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.661296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.661832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.661869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.661881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.662124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.662347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.662355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.662362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.665925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.657 [2024-07-15 14:16:06.675138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.657 [2024-07-15 14:16:06.675731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.657 [2024-07-15 14:16:06.675748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.657 [2024-07-15 14:16:06.675763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.657 [2024-07-15 14:16:06.675984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.657 [2024-07-15 14:16:06.676210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.657 [2024-07-15 14:16:06.676218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.657 [2024-07-15 14:16:06.676225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.657 [2024-07-15 14:16:06.679776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.658 [2024-07-15 14:16:06.688981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.658 [2024-07-15 14:16:06.689519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.658 [2024-07-15 14:16:06.689559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.658 [2024-07-15 14:16:06.689569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.658 [2024-07-15 14:16:06.689815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.658 [2024-07-15 14:16:06.690039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.658 [2024-07-15 14:16:06.690047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.658 [2024-07-15 14:16:06.690054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.658 [2024-07-15 14:16:06.693606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.658 [2024-07-15 14:16:06.702829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.658 [2024-07-15 14:16:06.703418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.658 [2024-07-15 14:16:06.703436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.658 [2024-07-15 14:16:06.703444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.658 [2024-07-15 14:16:06.703664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.658 [2024-07-15 14:16:06.703890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.658 [2024-07-15 14:16:06.703898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.658 [2024-07-15 14:16:06.703905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.658 [2024-07-15 14:16:06.707452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.658 [2024-07-15 14:16:06.716662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.658 [2024-07-15 14:16:06.717232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.658 [2024-07-15 14:16:06.717248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.658 [2024-07-15 14:16:06.717255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.658 [2024-07-15 14:16:06.717474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.658 [2024-07-15 14:16:06.717693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.658 [2024-07-15 14:16:06.717700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.658 [2024-07-15 14:16:06.717708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.658 [2024-07-15 14:16:06.721277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.658 [2024-07-15 14:16:06.730484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.658 [2024-07-15 14:16:06.731143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.658 [2024-07-15 14:16:06.731180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.658 [2024-07-15 14:16:06.731190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.658 [2024-07-15 14:16:06.731429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.658 [2024-07-15 14:16:06.731652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.658 [2024-07-15 14:16:06.731660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.658 [2024-07-15 14:16:06.731668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.658 [2024-07-15 14:16:06.735232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.658 [2024-07-15 14:16:06.744449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.658 [2024-07-15 14:16:06.745110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.658 [2024-07-15 14:16:06.745147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.658 [2024-07-15 14:16:06.745157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.658 [2024-07-15 14:16:06.745397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.658 [2024-07-15 14:16:06.745620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.658 [2024-07-15 14:16:06.745628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.658 [2024-07-15 14:16:06.745636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.658 [2024-07-15 14:16:06.749196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.658 [2024-07-15 14:16:06.758410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.658 [2024-07-15 14:16:06.758888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.658 [2024-07-15 14:16:06.758925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.658 [2024-07-15 14:16:06.758935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.658 [2024-07-15 14:16:06.759174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.658 [2024-07-15 14:16:06.759397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.658 [2024-07-15 14:16:06.759406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.658 [2024-07-15 14:16:06.759413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.658 [2024-07-15 14:16:06.762973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.920 [2024-07-15 14:16:06.772392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.920 [2024-07-15 14:16:06.773089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.920 [2024-07-15 14:16:06.773125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.920 [2024-07-15 14:16:06.773140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.920 [2024-07-15 14:16:06.773380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.920 [2024-07-15 14:16:06.773602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.920 [2024-07-15 14:16:06.773611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.920 [2024-07-15 14:16:06.773618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.920 [2024-07-15 14:16:06.777180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.920 [2024-07-15 14:16:06.786391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.920 [2024-07-15 14:16:06.786878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.920 [2024-07-15 14:16:06.786914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.920 [2024-07-15 14:16:06.786925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.920 [2024-07-15 14:16:06.787164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.920 [2024-07-15 14:16:06.787386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.920 [2024-07-15 14:16:06.787394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.920 [2024-07-15 14:16:06.787401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.920 [2024-07-15 14:16:06.790965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.920 [2024-07-15 14:16:06.800382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.920 [2024-07-15 14:16:06.800975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.920 [2024-07-15 14:16:06.801011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.920 [2024-07-15 14:16:06.801021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.920 [2024-07-15 14:16:06.801261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.920 [2024-07-15 14:16:06.801483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.920 [2024-07-15 14:16:06.801491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.920 [2024-07-15 14:16:06.801498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.920 [2024-07-15 14:16:06.805059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.920 [2024-07-15 14:16:06.814272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.920 [2024-07-15 14:16:06.814854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.920 [2024-07-15 14:16:06.814873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.920 [2024-07-15 14:16:06.814880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.920 [2024-07-15 14:16:06.815101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.920 [2024-07-15 14:16:06.815320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.920 [2024-07-15 14:16:06.815332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.920 [2024-07-15 14:16:06.815339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.920 [2024-07-15 14:16:06.818893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.920 [2024-07-15 14:16:06.828103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:08.920 [2024-07-15 14:16:06.828688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.920 [2024-07-15 14:16:06.828703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:08.920 [2024-07-15 14:16:06.828710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:08.920 [2024-07-15 14:16:06.828934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:08.920 [2024-07-15 14:16:06.829153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.920 [2024-07-15 14:16:06.829161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.920 [2024-07-15 14:16:06.829168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.920 [2024-07-15 14:16:06.832713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.920 [2024-07-15 14:16:06.841912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.920 [2024-07-15 14:16:06.842493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.920 [2024-07-15 14:16:06.842507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.920 [2024-07-15 14:16:06.842514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.920 [2024-07-15 14:16:06.842733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.920 [2024-07-15 14:16:06.842957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.920 [2024-07-15 14:16:06.842965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.920 [2024-07-15 14:16:06.842971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.920 [2024-07-15 14:16:06.846516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.920 [2024-07-15 14:16:06.855714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.920 [2024-07-15 14:16:06.856301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.920 [2024-07-15 14:16:06.856316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.920 [2024-07-15 14:16:06.856323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.920 [2024-07-15 14:16:06.856542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.920 [2024-07-15 14:16:06.856766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.920 [2024-07-15 14:16:06.856774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.920 [2024-07-15 14:16:06.856780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.920 [2024-07-15 14:16:06.860326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.920 [2024-07-15 14:16:06.869530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.920 [2024-07-15 14:16:06.870192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.920 [2024-07-15 14:16:06.870228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.920 [2024-07-15 14:16:06.870239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.920 [2024-07-15 14:16:06.870478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.920 [2024-07-15 14:16:06.870701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.920 [2024-07-15 14:16:06.870709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.920 [2024-07-15 14:16:06.870717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.920 [2024-07-15 14:16:06.874280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.920 [2024-07-15 14:16:06.883492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.920 [2024-07-15 14:16:06.884179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.920 [2024-07-15 14:16:06.884215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.920 [2024-07-15 14:16:06.884225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.920 [2024-07-15 14:16:06.884464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.920 [2024-07-15 14:16:06.884687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.920 [2024-07-15 14:16:06.884696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.920 [2024-07-15 14:16:06.884704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.920 [2024-07-15 14:16:06.888267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.920 [2024-07-15 14:16:06.897485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.920 [2024-07-15 14:16:06.898135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.920 [2024-07-15 14:16:06.898171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.898182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.898421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.898644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.898652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.898659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.902224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.921 [2024-07-15 14:16:06.911436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:06.912070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:06.912106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.912117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.912360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.912583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.912591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.912599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.916160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.921 [2024-07-15 14:16:06.925382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:06.926065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:06.926102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.926112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.926352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.926574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.926582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.926590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.930150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.921 [2024-07-15 14:16:06.939359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:06.940028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:06.940064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.940075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.940314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.940536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.940545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.940552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.944114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.921 [2024-07-15 14:16:06.953323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:06.953882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:06.953919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.953931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.954173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.954396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.954404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.954415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.957977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.921 [2024-07-15 14:16:06.967195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:06.967871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:06.967907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.967918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.968157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.968380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.968388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.968395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.971957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.921 [2024-07-15 14:16:06.981170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:06.981870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:06.981907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.981919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.982161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.982384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.982392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.982399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.985960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.921 [2024-07-15 14:16:06.995167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:06.995861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:06.995897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:06.995907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:06.996146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:06.996369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:06.996378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:06.996385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:06.999946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.921 [2024-07-15 14:16:07.009156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:07.009844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:07.009881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:07.009891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:07.010130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:07.010353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:07.010361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:07.010369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:07.013931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.921 [2024-07-15 14:16:07.023145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.921 [2024-07-15 14:16:07.023805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.921 [2024-07-15 14:16:07.023842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:08.921 [2024-07-15 14:16:07.023853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:08.921 [2024-07-15 14:16:07.024096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:08.921 [2024-07-15 14:16:07.024318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.921 [2024-07-15 14:16:07.024326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.921 [2024-07-15 14:16:07.024334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.921 [2024-07-15 14:16:07.027901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.184 [2024-07-15 14:16:07.037131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.184 [2024-07-15 14:16:07.037720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.184 [2024-07-15 14:16:07.037765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.184 [2024-07-15 14:16:07.037776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.184 [2024-07-15 14:16:07.038016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.184 [2024-07-15 14:16:07.038239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.184 [2024-07-15 14:16:07.038247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.184 [2024-07-15 14:16:07.038254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.184 [2024-07-15 14:16:07.041814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.184 [2024-07-15 14:16:07.051031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.184 [2024-07-15 14:16:07.051713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.184 [2024-07-15 14:16:07.051749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.184 [2024-07-15 14:16:07.051767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.184 [2024-07-15 14:16:07.052011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.184 [2024-07-15 14:16:07.052235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.184 [2024-07-15 14:16:07.052243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.184 [2024-07-15 14:16:07.052250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.184 [2024-07-15 14:16:07.055811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.184 [2024-07-15 14:16:07.065023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.184 [2024-07-15 14:16:07.065615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.184 [2024-07-15 14:16:07.065633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.184 [2024-07-15 14:16:07.065641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.184 [2024-07-15 14:16:07.065867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.184 [2024-07-15 14:16:07.066087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.184 [2024-07-15 14:16:07.066094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.184 [2024-07-15 14:16:07.066101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.184 [2024-07-15 14:16:07.069644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.184 [2024-07-15 14:16:07.078847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.184 [2024-07-15 14:16:07.079436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.184 [2024-07-15 14:16:07.079450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.184 [2024-07-15 14:16:07.079458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.184 [2024-07-15 14:16:07.079676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.184 [2024-07-15 14:16:07.079901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.184 [2024-07-15 14:16:07.079909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.184 [2024-07-15 14:16:07.079916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.184 [2024-07-15 14:16:07.083460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.184 [2024-07-15 14:16:07.092658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.184 [2024-07-15 14:16:07.093207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.184 [2024-07-15 14:16:07.093223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.184 [2024-07-15 14:16:07.093231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.184 [2024-07-15 14:16:07.093449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.184 [2024-07-15 14:16:07.093668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.184 [2024-07-15 14:16:07.093675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.184 [2024-07-15 14:16:07.093685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.184 [2024-07-15 14:16:07.097234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.184 [2024-07-15 14:16:07.106446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.184 [2024-07-15 14:16:07.107090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.184 [2024-07-15 14:16:07.107126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.184 [2024-07-15 14:16:07.107136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.184 [2024-07-15 14:16:07.107376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.184 [2024-07-15 14:16:07.107599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.184 [2024-07-15 14:16:07.107607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.184 [2024-07-15 14:16:07.107614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.184 [2024-07-15 14:16:07.111176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.184 [2024-07-15 14:16:07.120383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.184 [2024-07-15 14:16:07.121072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.184 [2024-07-15 14:16:07.121109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.184 [2024-07-15 14:16:07.121119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.184 [2024-07-15 14:16:07.121359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.184 [2024-07-15 14:16:07.121582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.121590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.121597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.125168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.185 [2024-07-15 14:16:07.134384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.135053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.135089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.135099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.135338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.135561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.135569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.135576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.139141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.185 [2024-07-15 14:16:07.148345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.149025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.149066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.149077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.149316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.149539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.149547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.149554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.153114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.185 [2024-07-15 14:16:07.162324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.163000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.163037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.163047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.163286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.163509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.163517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.163524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.167088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.185 [2024-07-15 14:16:07.176302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.176980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.177017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.177027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.177266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.177489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.177497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.177504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.181065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.185 [2024-07-15 14:16:07.190273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.190706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.190725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.190733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.191014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.191240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.191250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.191257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.194813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.185 [2024-07-15 14:16:07.204232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.204900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.204937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.204949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.205192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.205414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.205422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.205430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.208991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.185 [2024-07-15 14:16:07.218075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.218767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.218803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.218815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.219058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.219280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.219288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.219296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.222862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.185 [2024-07-15 14:16:07.232070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.232667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.232685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.232692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.232919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.233139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.233147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.233154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.236705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.185 [2024-07-15 14:16:07.245903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.246443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.246458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.246465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.246684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.246909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.246918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.246924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.250469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.185 [2024-07-15 14:16:07.259877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.260544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.260581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.260591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.260840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.261064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.185 [2024-07-15 14:16:07.261072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.185 [2024-07-15 14:16:07.261079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.185 [2024-07-15 14:16:07.264629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.185 [2024-07-15 14:16:07.273840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.185 [2024-07-15 14:16:07.274405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.185 [2024-07-15 14:16:07.274423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.185 [2024-07-15 14:16:07.274431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.185 [2024-07-15 14:16:07.274650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.185 [2024-07-15 14:16:07.274876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.186 [2024-07-15 14:16:07.274885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.186 [2024-07-15 14:16:07.274891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.186 [2024-07-15 14:16:07.278438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.186 [2024-07-15 14:16:07.287636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.186 [2024-07-15 14:16:07.288181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.186 [2024-07-15 14:16:07.288196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.186 [2024-07-15 14:16:07.288208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.186 [2024-07-15 14:16:07.288427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.186 [2024-07-15 14:16:07.288646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.186 [2024-07-15 14:16:07.288653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.186 [2024-07-15 14:16:07.288660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.186 [2024-07-15 14:16:07.292214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.447 [2024-07-15 14:16:07.301429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.447 [2024-07-15 14:16:07.302013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.447 [2024-07-15 14:16:07.302028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.447 [2024-07-15 14:16:07.302036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.447 [2024-07-15 14:16:07.302255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.448 [2024-07-15 14:16:07.302475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.448 [2024-07-15 14:16:07.302482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.448 [2024-07-15 14:16:07.302489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.448 [2024-07-15 14:16:07.306042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.448 [2024-07-15 14:16:07.315257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.448 [2024-07-15 14:16:07.315835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.448 [2024-07-15 14:16:07.315851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.448 [2024-07-15 14:16:07.315858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.448 [2024-07-15 14:16:07.316077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.448 [2024-07-15 14:16:07.316295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.448 [2024-07-15 14:16:07.316304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.448 [2024-07-15 14:16:07.316310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.448 [2024-07-15 14:16:07.319860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.448 [2024-07-15 14:16:07.329084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.448 [2024-07-15 14:16:07.329721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.448 [2024-07-15 14:16:07.329765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.448 [2024-07-15 14:16:07.329776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.448 [2024-07-15 14:16:07.330015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.448 [2024-07-15 14:16:07.330238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.448 [2024-07-15 14:16:07.330251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.448 [2024-07-15 14:16:07.330258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.448 [2024-07-15 14:16:07.333820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.448 [2024-07-15 14:16:07.343032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.448 [2024-07-15 14:16:07.343633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.448 [2024-07-15 14:16:07.343650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.448 [2024-07-15 14:16:07.343658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.448 [2024-07-15 14:16:07.343884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.448 [2024-07-15 14:16:07.344104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.448 [2024-07-15 14:16:07.344111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.448 [2024-07-15 14:16:07.344118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.448 [2024-07-15 14:16:07.347664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.448 [2024-07-15 14:16:07.356875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.448 [2024-07-15 14:16:07.357563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.448 [2024-07-15 14:16:07.357599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.448 [2024-07-15 14:16:07.357609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.448 [2024-07-15 14:16:07.357858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.448 [2024-07-15 14:16:07.358082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.448 [2024-07-15 14:16:07.358090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.448 [2024-07-15 14:16:07.358098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.448 [2024-07-15 14:16:07.361648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.448 [2024-07-15 14:16:07.370852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.448 [2024-07-15 14:16:07.371397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.448 [2024-07-15 14:16:07.371434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.448 [2024-07-15 14:16:07.371444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.448 [2024-07-15 14:16:07.371683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.448 [2024-07-15 14:16:07.371917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.448 [2024-07-15 14:16:07.371928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.448 [2024-07-15 14:16:07.371935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.448 [2024-07-15 14:16:07.375487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.448 [2024-07-15 14:16:07.384713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.448 [2024-07-15 14:16:07.385400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.448 [2024-07-15 14:16:07.385436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.448 [2024-07-15 14:16:07.385446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.448 [2024-07-15 14:16:07.385685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.448 [2024-07-15 14:16:07.385917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.448 [2024-07-15 14:16:07.385926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.448 [2024-07-15 14:16:07.385934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.448 [2024-07-15 14:16:07.389488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.448 [2024-07-15 14:16:07.398690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.448 [2024-07-15 14:16:07.399319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.448 [2024-07-15 14:16:07.399356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.448 [2024-07-15 14:16:07.399366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.448 [2024-07-15 14:16:07.399605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.448 [2024-07-15 14:16:07.399837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.448 [2024-07-15 14:16:07.399846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.448 [2024-07-15 14:16:07.399853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.448 [2024-07-15 14:16:07.403404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.448 [2024-07-15 14:16:07.412614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.448 [2024-07-15 14:16:07.413189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.448 [2024-07-15 14:16:07.413225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.448 [2024-07-15 14:16:07.413236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.448 [2024-07-15 14:16:07.413476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.448 [2024-07-15 14:16:07.413698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.448 [2024-07-15 14:16:07.413706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.448 [2024-07-15 14:16:07.413713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.448 [2024-07-15 14:16:07.417276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.448 [2024-07-15 14:16:07.426493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.448 [2024-07-15 14:16:07.427125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.448 [2024-07-15 14:16:07.427162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.448 [2024-07-15 14:16:07.427174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.448 [2024-07-15 14:16:07.427418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.448 [2024-07-15 14:16:07.427642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.448 [2024-07-15 14:16:07.427650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.448 [2024-07-15 14:16:07.427657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.448 [2024-07-15 14:16:07.431219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.448 [2024-07-15 14:16:07.440427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.448 [2024-07-15 14:16:07.441001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.448 [2024-07-15 14:16:07.441037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.448 [2024-07-15 14:16:07.441049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.448 [2024-07-15 14:16:07.441289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.448 [2024-07-15 14:16:07.441512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.448 [2024-07-15 14:16:07.441520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.448 [2024-07-15 14:16:07.441527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.448 [2024-07-15 14:16:07.445086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.448 [2024-07-15 14:16:07.454231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.448 [2024-07-15 14:16:07.454858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.448 [2024-07-15 14:16:07.454895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.454907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.455149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.455372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.455380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.455387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.458943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.449 [2024-07-15 14:16:07.468152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.449 [2024-07-15 14:16:07.468744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.449 [2024-07-15 14:16:07.468767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.468775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.468995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.469214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.469222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.469233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.472783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.449 [2024-07-15 14:16:07.481981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.449 [2024-07-15 14:16:07.482522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.449 [2024-07-15 14:16:07.482537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.482545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.482769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.482989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.482997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.483003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.486548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.449 [2024-07-15 14:16:07.495956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.449 [2024-07-15 14:16:07.496497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.449 [2024-07-15 14:16:07.496512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.496519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.496737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.496962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.496970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.496977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.500520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.449 [2024-07-15 14:16:07.509931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.449 [2024-07-15 14:16:07.510592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.449 [2024-07-15 14:16:07.510629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.510640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.510888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.511113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.511122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.511129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.514680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.449 [2024-07-15 14:16:07.523893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.449 [2024-07-15 14:16:07.524554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.449 [2024-07-15 14:16:07.524590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.524601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.524848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.525072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.525080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.525088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.528636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.449 [2024-07-15 14:16:07.537845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.449 [2024-07-15 14:16:07.538509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.449 [2024-07-15 14:16:07.538545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.538555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.538804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.539028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.539036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.539043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.542593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.449 [2024-07-15 14:16:07.551799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.449 [2024-07-15 14:16:07.552471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.449 [2024-07-15 14:16:07.552507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.449 [2024-07-15 14:16:07.552518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.449 [2024-07-15 14:16:07.552766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.449 [2024-07-15 14:16:07.552990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.449 [2024-07-15 14:16:07.552998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.449 [2024-07-15 14:16:07.553005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.449 [2024-07-15 14:16:07.556762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.710 [2024-07-15 14:16:07.565782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.710 [2024-07-15 14:16:07.566325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.710 [2024-07-15 14:16:07.566343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.710 [2024-07-15 14:16:07.566351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.710 [2024-07-15 14:16:07.566571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.566803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.566812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.566818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.570370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.579573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.580124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.580139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.580146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.580365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.580584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.580591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.580598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.584147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.593553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.594075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.594090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.594098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.594316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.594535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.594542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.594549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.598098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.607506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.608047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.608062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.608069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.608288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.608507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.608514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.608521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.612076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.621477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.622062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.622077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.622085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.622303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.622522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.622530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.622536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.626093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.635295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.635863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.635878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.635885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.636103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.636322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.636329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.636336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.639886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.649087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.649742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.649786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.649797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.650036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.650259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.650267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.650275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.653842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.663064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.663739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.663785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.663800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.664040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.664262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.664270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.664278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.667840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.677064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.677656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.677674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.677682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.677909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.678128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.678136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.678143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.681693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.690917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.691569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.691605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.691615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.691862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.692086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.692095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.692103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.695655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.704887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.705576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.711 [2024-07-15 14:16:07.705612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.711 [2024-07-15 14:16:07.705622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.711 [2024-07-15 14:16:07.705871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.711 [2024-07-15 14:16:07.706099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.711 [2024-07-15 14:16:07.706108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.711 [2024-07-15 14:16:07.706116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.711 [2024-07-15 14:16:07.709670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.711 [2024-07-15 14:16:07.718895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.711 [2024-07-15 14:16:07.719563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.719599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.719610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.719859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.720083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.720091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.720099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.723669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.712 [2024-07-15 14:16:07.732896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.712 [2024-07-15 14:16:07.733550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.733587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.733597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.733844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.734068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.734077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.734084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.737636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.712 [2024-07-15 14:16:07.746852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.712 [2024-07-15 14:16:07.747532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.747569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.747579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.747829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.748052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.748060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.748068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.751621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.712 [2024-07-15 14:16:07.760842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.712 [2024-07-15 14:16:07.761390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.761408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.761416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.761635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.761862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.761870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.761877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.765426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.712 [2024-07-15 14:16:07.774839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.712 [2024-07-15 14:16:07.775418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.775434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.775441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.775660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.775883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.775891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.775898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.779443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.712 [2024-07-15 14:16:07.788646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.712 [2024-07-15 14:16:07.789125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.789140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.789148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.789366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.789584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.789592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.789599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.793148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.712 [2024-07-15 14:16:07.802566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.712 [2024-07-15 14:16:07.803097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.803113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.803123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.803342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.803561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.803569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.803575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.807133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.712 [2024-07-15 14:16:07.816561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.712 [2024-07-15 14:16:07.817090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.712 [2024-07-15 14:16:07.817105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.712 [2024-07-15 14:16:07.817112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.712 [2024-07-15 14:16:07.817331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.712 [2024-07-15 14:16:07.817549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.712 [2024-07-15 14:16:07.817557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.712 [2024-07-15 14:16:07.817564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.712 [2024-07-15 14:16:07.821119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.974 [2024-07-15 14:16:07.830554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.974 [2024-07-15 14:16:07.831102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.974 [2024-07-15 14:16:07.831117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.831124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.831343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.831561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.831569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.831576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.835129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.844548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.845091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.845106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.845113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.845332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.845551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.845563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.845570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.849127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.858340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.858888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.858904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.858911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.859130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.859349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.859356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.859363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.862918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.872151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.872778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.872814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.872825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.873065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.873288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.873296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.873304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.876871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.886083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.886769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.886804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.886816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.887058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.887281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.887289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.887297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.890862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.900088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.900688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.900706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.900713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.900940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.901159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.901167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.901174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.904723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.913948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.914501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.914516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.914523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.914742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.914966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.914975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.914982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.918533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.927772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.928350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.928366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.928373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.928592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.928816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.928825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.928831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.932384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.941599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.942164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.942179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.942186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.942409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.942627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.942635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.942642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.946199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.955420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.955957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.955973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.955980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.956199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.956418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.956425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.956432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.959989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.969414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.969970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 14:16:07.969985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.975 [2024-07-15 14:16:07.969993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.975 [2024-07-15 14:16:07.970212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.975 [2024-07-15 14:16:07.970431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.975 [2024-07-15 14:16:07.970439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.975 [2024-07-15 14:16:07.970446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.975 [2024-07-15 14:16:07.974001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.975 [2024-07-15 14:16:07.983222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.975 [2024-07-15 14:16:07.983766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.976 [2024-07-15 14:16:07.983781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.976 [2024-07-15 14:16:07.983788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.976 [2024-07-15 14:16:07.984007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.976 [2024-07-15 14:16:07.984226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.976 [2024-07-15 14:16:07.984234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.976 [2024-07-15 14:16:07.984245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.976 [2024-07-15 14:16:07.987803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.976 [2024-07-15 14:16:07.997021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.976 [2024-07-15 14:16:07.997615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.976 [2024-07-15 14:16:07.997629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.976 [2024-07-15 14:16:07.997636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.976 [2024-07-15 14:16:07.997860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.976 [2024-07-15 14:16:07.998080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.976 [2024-07-15 14:16:07.998088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.976 [2024-07-15 14:16:07.998094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.976 [2024-07-15 14:16:08.001646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.976 [2024-07-15 14:16:08.010873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.976 [2024-07-15 14:16:08.011412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.976 [2024-07-15 14:16:08.011426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.976 [2024-07-15 14:16:08.011433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.976 [2024-07-15 14:16:08.011652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.976 [2024-07-15 14:16:08.011878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.976 [2024-07-15 14:16:08.011886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.976 [2024-07-15 14:16:08.011893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.976 [2024-07-15 14:16:08.015443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.976 [2024-07-15 14:16:08.024940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.976 [2024-07-15 14:16:08.025448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.976 [2024-07-15 14:16:08.025463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.976 [2024-07-15 14:16:08.025470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.976 [2024-07-15 14:16:08.025689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.976 [2024-07-15 14:16:08.025913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.976 [2024-07-15 14:16:08.025921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.976 [2024-07-15 14:16:08.025928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.976 [2024-07-15 14:16:08.029478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.976 [2024-07-15 14:16:08.038902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.976 [2024-07-15 14:16:08.039482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.976 [2024-07-15 14:16:08.039500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.976 [2024-07-15 14:16:08.039508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.976 [2024-07-15 14:16:08.039726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.976 [2024-07-15 14:16:08.039951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.976 [2024-07-15 14:16:08.039960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.976 [2024-07-15 14:16:08.039966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.976 [2024-07-15 14:16:08.043516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.976 [2024-07-15 14:16:08.052736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:09.976 [2024-07-15 14:16:08.053319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.976 [2024-07-15 14:16:08.053334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:09.976 [2024-07-15 14:16:08.053341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:09.976 [2024-07-15 14:16:08.053560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:09.976 [2024-07-15 14:16:08.053785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:09.976 [2024-07-15 14:16:08.053793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:09.976 [2024-07-15 14:16:08.053800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:09.976 [2024-07-15 14:16:08.057348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:09.976 [2024-07-15 14:16:08.066567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.976 [2024-07-15 14:16:08.066930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-15 14:16:08.066944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.976 [2024-07-15 14:16:08.066952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.976 [2024-07-15 14:16:08.067171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.976 [2024-07-15 14:16:08.067390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.976 [2024-07-15 14:16:08.067397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.976 [2024-07-15 14:16:08.067404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.976 [2024-07-15 14:16:08.070962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.976 [2024-07-15 14:16:08.080388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.976 [2024-07-15 14:16:08.080951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-15 14:16:08.080967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:09.976 [2024-07-15 14:16:08.080974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:09.976 [2024-07-15 14:16:08.081193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:09.976 [2024-07-15 14:16:08.081418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.976 [2024-07-15 14:16:08.081426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.976 [2024-07-15 14:16:08.081433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.976 [2024-07-15 14:16:08.084991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.239 [2024-07-15 14:16:08.094205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.239 [2024-07-15 14:16:08.094757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-07-15 14:16:08.094773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.239 [2024-07-15 14:16:08.094780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.239 [2024-07-15 14:16:08.094999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.239 [2024-07-15 14:16:08.095219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.239 [2024-07-15 14:16:08.095227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.239 [2024-07-15 14:16:08.095234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.239 [2024-07-15 14:16:08.098788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.239 [2024-07-15 14:16:08.107999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.239 [2024-07-15 14:16:08.108545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-07-15 14:16:08.108559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.239 [2024-07-15 14:16:08.108567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.239 [2024-07-15 14:16:08.108793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.239 [2024-07-15 14:16:08.109013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.239 [2024-07-15 14:16:08.109020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.239 [2024-07-15 14:16:08.109028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.239 [2024-07-15 14:16:08.112577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.239 [2024-07-15 14:16:08.121797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.239 [2024-07-15 14:16:08.122472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-07-15 14:16:08.122509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.239 [2024-07-15 14:16:08.122520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.239 [2024-07-15 14:16:08.122776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.239 [2024-07-15 14:16:08.123000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.239 [2024-07-15 14:16:08.123008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.239 [2024-07-15 14:16:08.123016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.239 [2024-07-15 14:16:08.126576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.239 [2024-07-15 14:16:08.135798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.239 [2024-07-15 14:16:08.136395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-07-15 14:16:08.136412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.239 [2024-07-15 14:16:08.136420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.239 [2024-07-15 14:16:08.136640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.239 [2024-07-15 14:16:08.136865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.239 [2024-07-15 14:16:08.136873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.239 [2024-07-15 14:16:08.136880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.239 [2024-07-15 14:16:08.140430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.239 [2024-07-15 14:16:08.149642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.239 [2024-07-15 14:16:08.150332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-07-15 14:16:08.150370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.239 [2024-07-15 14:16:08.150380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.239 [2024-07-15 14:16:08.150620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.239 [2024-07-15 14:16:08.150849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.239 [2024-07-15 14:16:08.150859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.239 [2024-07-15 14:16:08.150866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.239 [2024-07-15 14:16:08.154423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.239 [2024-07-15 14:16:08.163647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.239 [2024-07-15 14:16:08.164335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.239 [2024-07-15 14:16:08.164371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.239 [2024-07-15 14:16:08.164383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.164623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.164853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.164861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.164869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.168423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.240 [2024-07-15 14:16:08.177633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.178319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.178355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.178371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.178610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.178840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.178849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.178857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.182409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.240 [2024-07-15 14:16:08.191617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.192206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.192224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.192233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.192452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.192672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.192680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.192687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.196242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.240 [2024-07-15 14:16:08.205452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.205996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.206013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.206020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.206240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.206458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.206466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.206473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.210023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.240 [2024-07-15 14:16:08.219439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.219971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.219986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.219993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.220212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.220431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.220442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.220449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.224011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.240 [2024-07-15 14:16:08.233433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.233926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.233963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.233975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.234216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.234439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.234448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.234455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.238018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.240 [2024-07-15 14:16:08.247315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.247885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.247921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.247933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.248177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.248400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.248408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.248416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.251975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.240 [2024-07-15 14:16:08.261189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.261833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.261870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.261882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.262122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.262345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.262353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.262361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.265921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.240 [2024-07-15 14:16:08.275153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.275703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.275721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.275729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.275953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.276173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.276180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.276187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.279735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.240 [2024-07-15 14:16:08.288947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.289634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.289649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.289656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.289880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.290099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.290108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.290115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.293660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.240 [2024-07-15 14:16:08.302871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.303538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.303574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.240 [2024-07-15 14:16:08.303585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.240 [2024-07-15 14:16:08.303831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.240 [2024-07-15 14:16:08.304055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.240 [2024-07-15 14:16:08.304063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.240 [2024-07-15 14:16:08.304071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.240 [2024-07-15 14:16:08.307626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.240 [2024-07-15 14:16:08.316844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.240 [2024-07-15 14:16:08.317493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.240 [2024-07-15 14:16:08.317530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.241 [2024-07-15 14:16:08.317546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.241 [2024-07-15 14:16:08.317793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.241 [2024-07-15 14:16:08.318016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.241 [2024-07-15 14:16:08.318025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.241 [2024-07-15 14:16:08.318032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.241 [2024-07-15 14:16:08.321585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.241 [2024-07-15 14:16:08.330815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.241 [2024-07-15 14:16:08.331258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-07-15 14:16:08.331277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.241 [2024-07-15 14:16:08.331285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.241 [2024-07-15 14:16:08.331505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.241 [2024-07-15 14:16:08.331725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.241 [2024-07-15 14:16:08.331733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.241 [2024-07-15 14:16:08.331740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.241 [2024-07-15 14:16:08.335296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.241 [2024-07-15 14:16:08.344759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.241 [2024-07-15 14:16:08.345438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.241 [2024-07-15 14:16:08.345474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.241 [2024-07-15 14:16:08.345484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.241 [2024-07-15 14:16:08.345723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.241 [2024-07-15 14:16:08.345957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.241 [2024-07-15 14:16:08.345967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.241 [2024-07-15 14:16:08.345974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.241 [2024-07-15 14:16:08.349529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.504 [2024-07-15 14:16:08.358747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.359417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.359453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.359464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.359703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.359934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.359947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.359955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.363512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.504 [2024-07-15 14:16:08.372734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.373236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.373255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.373263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.373483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.373702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.373709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.373716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.377270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.504 [2024-07-15 14:16:08.386687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.387104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.387119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.387127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.387346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.387565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.387572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.387579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.391130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.504 [2024-07-15 14:16:08.400542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.401085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.401100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.401107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.401326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.401544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.401551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.401558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.405116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.504 [2024-07-15 14:16:08.414401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.414984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.415001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.415008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.415227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.415445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.415453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.415459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.419011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.504 [2024-07-15 14:16:08.428230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.428797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.428820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.428828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.429051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.429271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.429278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.429285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.432841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.504 [2024-07-15 14:16:08.442049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.442717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.442761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.442773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.443016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.443239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.443247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.443254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.446812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.504 [2024-07-15 14:16:08.456039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.456718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.456761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.456774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.457021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.457244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.504 [2024-07-15 14:16:08.457252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.504 [2024-07-15 14:16:08.457259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.504 [2024-07-15 14:16:08.460818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.504 [2024-07-15 14:16:08.470035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.504 [2024-07-15 14:16:08.470743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.504 [2024-07-15 14:16:08.470785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.504 [2024-07-15 14:16:08.470797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.504 [2024-07-15 14:16:08.471037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.504 [2024-07-15 14:16:08.471260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.471268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.471276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.474834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.505 [2024-07-15 14:16:08.483838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.484484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.484521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.484531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.484778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.485001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.485009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.485016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.488569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.505 [2024-07-15 14:16:08.497795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.498438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.498475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.498485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.498725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.498955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.498964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.498976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.502531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.505 [2024-07-15 14:16:08.511745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.512309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.512345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.512356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.512595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.512828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.512838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.512845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.516399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.505 [2024-07-15 14:16:08.525625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.526252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.526289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.526299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.526538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.526770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.526779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.526786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.530341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.505 [2024-07-15 14:16:08.539549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.540234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.540271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.540281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.540520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.540743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.540760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.540768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.544321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.505 [2024-07-15 14:16:08.553529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.554117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.554140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.554148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.554367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.554586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.554594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.554601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.558347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.505 [2024-07-15 14:16:08.567353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.568007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.568043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.568054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.568293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.568516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.568524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.568531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.572092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.505 [2024-07-15 14:16:08.581304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.505 [2024-07-15 14:16:08.582033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.505 [2024-07-15 14:16:08.582070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:10.505 [2024-07-15 14:16:08.582080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:10.505 [2024-07-15 14:16:08.582320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:10.505 [2024-07-15 14:16:08.582542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.505 [2024-07-15 14:16:08.582551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.505 [2024-07-15 14:16:08.582558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.505 [2024-07-15 14:16:08.586120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:10.505 [... 42 further identical reset/reconnect cycles omitted, 14:16:08.595 through 14:16:09.171: each repeats the sequence above (nvme_ctrlr_disconnect resetting controller; posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420; controller reinitialization failed; Resetting controller failed.) ...]
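Errno 111 is ECONNREFUSED: throughout this stretch nothing is accepting TCP connections on 10.0.0.2:4420, so every controller reset attempt fails at the socket-connect step. That is expected here, because the target application serving that listener has just been killed (see the Killed "${NVMF_APP[@]}" line below) and is only restarted afterwards. A minimal sketch of how one could confirm this by hand from the initiator side; the probe below is illustrative and not part of bdevperf.sh:

# Illustrative probe, not part of the test scripts: bash's /dev/tcp
# pseudo-device hits the same ECONNREFUSED the initiator logs above.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is not accepting connections (refused or timed out)"
fi

Once a listener is back on port 4420 the same probe succeeds, and the bdev_nvme reset path can reconnect instead of looping on errno 111.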
00:30:11.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1564927 Killed "${NVMF_APP[@]}" "$@" 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 [2024-07-15 14:16:09.180195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.298 [2024-07-15 14:16:09.180887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.298 [2024-07-15 14:16:09.180924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.298 [2024-07-15 14:16:09.180936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.298 [2024-07-15 14:16:09.181179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.298 [2024-07-15 14:16:09.181401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.298 [2024-07-15 14:16:09.181409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.298 [2024-07-15 14:16:09.181417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.298 [2024-07-15 14:16:09.184976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1566627 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1566627 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1566627 ']' 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.298 14:16:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 [2024-07-15 14:16:09.194187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.298 [2024-07-15 14:16:09.194659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.298 [2024-07-15 14:16:09.194677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.298 [2024-07-15 14:16:09.194685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.298 [2024-07-15 14:16:09.194911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.298 [2024-07-15 14:16:09.195131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.298 [2024-07-15 14:16:09.195140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.298 [2024-07-15 14:16:09.195147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.298 [2024-07-15 14:16:09.198696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.298 [2024-07-15 14:16:09.208122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.298 [2024-07-15 14:16:09.208779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.298 [2024-07-15 14:16:09.208816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.298 [2024-07-15 14:16:09.208827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.298 [2024-07-15 14:16:09.209069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.298 [2024-07-15 14:16:09.209292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.298 [2024-07-15 14:16:09.209301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.298 [2024-07-15 14:16:09.209309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.298 [2024-07-15 14:16:09.212872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.298 [2024-07-15 14:16:09.222092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.298 [2024-07-15 14:16:09.222509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.298 [2024-07-15 14:16:09.222527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.298 [2024-07-15 14:16:09.222535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.298 [2024-07-15 14:16:09.222760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.298 [2024-07-15 14:16:09.222981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.298 [2024-07-15 14:16:09.222989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.298 [2024-07-15 14:16:09.222996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.298 [2024-07-15 14:16:09.226556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:11.298 [2024-07-15 14:16:09.234801] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:30:11.298 [2024-07-15 14:16:09.234846] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:11.298 [2024-07-15 14:16:09.235975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.298 [2024-07-15 14:16:09.236631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.298 [2024-07-15 14:16:09.236669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.298 [2024-07-15 14:16:09.236681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.298 [2024-07-15 14:16:09.236929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.298 [2024-07-15 14:16:09.237154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.298 [2024-07-15 14:16:09.237163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.298 [2024-07-15 14:16:09.237171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.298 [2024-07-15 14:16:09.240723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
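Every reconnect attempt in this stretch fails with errno = 111, which is ECONNREFUSED on Linux: tgt_init has just killed the old nvmf_tgt and the replacement is still initializing, so nothing is listening on 10.0.0.2:4420 yet. One quick way to see the same refusal by hand (a sketch; the namespace name cvl_0_0_ns_spdk is taken from the nvmf_tgt launch line above) is:

    # Probe the NVMe/TCP listener from the target's network namespace.
    # Prints "refused" while nvmf_tgt is down, "open" once it listens again.
    ip netns exec cvl_0_0_ns_spdk bash -c \
      'if timeout 1 bash -c "exec 3<>/dev/tcp/10.0.0.2/4420"; then echo open; else echo refused; fi'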
00:30:11.299 EAL: No free 2048 kB hugepages reported on node 1
00:30:11.299 [2024-07-15 14:16:09.277761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.299 [2024-07-15 14:16:09.278426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.299 [2024-07-15 14:16:09.278463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.299 [2024-07-15 14:16:09.278474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.299 [2024-07-15 14:16:09.278718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.299 [2024-07-15 14:16:09.278952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.299 [2024-07-15 14:16:09.278962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.299 [2024-07-15 14:16:09.278970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.299 [2024-07-15 14:16:09.282524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:11.299 [2024-07-15 14:16:09.291741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.299 [2024-07-15 14:16:09.292331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.299 [2024-07-15 14:16:09.292350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.299 [2024-07-15 14:16:09.292358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.299 [2024-07-15 14:16:09.292578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.299 [2024-07-15 14:16:09.292804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.299 [2024-07-15 14:16:09.292812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.299 [2024-07-15 14:16:09.292820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.299 [2024-07-15 14:16:09.296368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
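The EAL notice above only says that NUMA node 1 has no 2048 kB hugepages reserved; the target still starts because node 0's pool suffices. To inspect (and, if needed, top up) the per-node reservation, something like the following works on a typical Linux host; these are the standard sysfs hugepage knobs, not anything specific to this CI job:

    # Show 2 MB hugepage counts per NUMA node (sketch, assumes 2048 kB pages).
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        echo "$n: $(cat "$n")"
    done
    # Reserve 1024 x 2 MB pages on node 1 (requires root).
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages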
00:30:11.299 [2024-07-15 14:16:09.305575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.299 [2024-07-15 14:16:09.306108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.299 [2024-07-15 14:16:09.306124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.299 [2024-07-15 14:16:09.306132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.299 [2024-07-15 14:16:09.306351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.299 [2024-07-15 14:16:09.306571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.299 [2024-07-15 14:16:09.306580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.299 [2024-07-15 14:16:09.306587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.299 [2024-07-15 14:16:09.310139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:11.299 [2024-07-15 14:16:09.319549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.299 [2024-07-15 14:16:09.320208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.299 [2024-07-15 14:16:09.320246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.299 [2024-07-15 14:16:09.320257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.299 [2024-07-15 14:16:09.320496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.299 [2024-07-15 14:16:09.320720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.299 [2024-07-15 14:16:09.320730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.299 [2024-07-15 14:16:09.320742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.299 [2024-07-15 14:16:09.320818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:11.299 [2024-07-15 14:16:09.324317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
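"Total cores available: 3" is consistent with the -m 0xE mask passed to nvmfappstart: 0xE is binary 1110, selecting cores 1, 2 and 3 and leaving core 0 free, which also matches the three reactor threads started a few entries further down. A one-liner to expand such a coremask (illustration only):

    # Expand a coremask into the CPU ids it selects: 0xE -> 1 2 3
    mask=0xE; for i in {0..31}; do (( (mask >> i) & 1 )) && printf '%d ' "$i"; done; echo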
00:30:11.299 [2024-07-15 14:16:09.361388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.299 [2024-07-15 14:16:09.361955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.299 [2024-07-15 14:16:09.361972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.299 [2024-07-15 14:16:09.361980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.299 [2024-07-15 14:16:09.362200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.299 [2024-07-15 14:16:09.362420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.299 [2024-07-15 14:16:09.362429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.299 [2024-07-15 14:16:09.362436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.299 [2024-07-15 14:16:09.365989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:11.299 [2024-07-15 14:16:09.374264] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:11.299 [2024-07-15 14:16:09.374290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:11.299 [2024-07-15 14:16:09.374296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:11.299 [2024-07-15 14:16:09.374301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:11.299 [2024-07-15 14:16:09.374305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:11.299 [2024-07-15 14:16:09.374404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:30:11.299 [2024-07-15 14:16:09.374560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:11.299 [2024-07-15 14:16:09.374561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:30:11.299 [2024-07-15 14:16:09.375205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.299 [2024-07-15 14:16:09.375871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.299 [2024-07-15 14:16:09.375910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.299 [2024-07-15 14:16:09.375923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.299 [2024-07-15 14:16:09.376166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.299 [2024-07-15 14:16:09.376391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.299 [2024-07-15 14:16:09.376400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.299 [2024-07-15 14:16:09.376408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.299 [2024-07-15 14:16:09.379973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:11.299 [2024-07-15 14:16:09.389190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.299 [2024-07-15 14:16:09.389852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.299 [2024-07-15 14:16:09.389892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.299 [2024-07-15 14:16:09.389905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.299 [2024-07-15 14:16:09.390150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.299 [2024-07-15 14:16:09.390374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.299 [2024-07-15 14:16:09.390383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.299 [2024-07-15 14:16:09.390391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.300 [2024-07-15 14:16:09.393954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:11.300 [2024-07-15 14:16:09.403170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.300 [2024-07-15 14:16:09.403895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.300 [2024-07-15 14:16:09.403934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:11.300 [2024-07-15 14:16:09.403945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:11.300 [2024-07-15 14:16:09.404186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:11.300 [2024-07-15 14:16:09.404410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:11.300 [2024-07-15 14:16:09.404424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:11.300 [2024-07-15 14:16:09.404432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.300 [2024-07-15 14:16:09.407997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
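The app_setup_trace notices a few entries up spell out how to grab tracepoint data while the target runs. Following them literally might look like this (a sketch; the command line and the /dev/shm/nvmf_trace.0 file name come straight from the log, the output paths are arbitrary):

    # Capture a snapshot of runtime events from the live target instance 0...
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # ...or keep the raw shared-memory trace file for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0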
00:30:11.830 [2024-07-15 14:16:09.861443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.830 [2024-07-15 14:16:09.862168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.831 [2024-07-15 14:16:09.862206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.831 [2024-07-15 14:16:09.862217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.831 [2024-07-15 14:16:09.862457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.831 [2024-07-15 14:16:09.862681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.831 [2024-07-15 14:16:09.862690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.831 [2024-07-15 14:16:09.862698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.831 [2024-07-15 14:16:09.866263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.831 [2024-07-15 14:16:09.875276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.831 [2024-07-15 14:16:09.875886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.831 [2024-07-15 14:16:09.875924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.831 [2024-07-15 14:16:09.875940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.831 [2024-07-15 14:16:09.876183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.831 [2024-07-15 14:16:09.876406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.831 [2024-07-15 14:16:09.876416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.831 [2024-07-15 14:16:09.876423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.831 [2024-07-15 14:16:09.879984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.831 [2024-07-15 14:16:09.889207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.831 [2024-07-15 14:16:09.889876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.831 [2024-07-15 14:16:09.889914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.831 [2024-07-15 14:16:09.889926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.831 [2024-07-15 14:16:09.890168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.831 [2024-07-15 14:16:09.890392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.831 [2024-07-15 14:16:09.890401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.831 [2024-07-15 14:16:09.890410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.831 [2024-07-15 14:16:09.893972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.831 [2024-07-15 14:16:09.903193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.831 [2024-07-15 14:16:09.903769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.831 [2024-07-15 14:16:09.903808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.831 [2024-07-15 14:16:09.903818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.831 [2024-07-15 14:16:09.904058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.831 [2024-07-15 14:16:09.904282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.831 [2024-07-15 14:16:09.904291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.831 [2024-07-15 14:16:09.904299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.831 [2024-07-15 14:16:09.907855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.831 [2024-07-15 14:16:09.917077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.831 [2024-07-15 14:16:09.917520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.831 [2024-07-15 14:16:09.917538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.831 [2024-07-15 14:16:09.917547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.831 [2024-07-15 14:16:09.917772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.831 [2024-07-15 14:16:09.917993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.831 [2024-07-15 14:16:09.918006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.831 [2024-07-15 14:16:09.918014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.831 [2024-07-15 14:16:09.921562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.831 [2024-07-15 14:16:09.930995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.831 [2024-07-15 14:16:09.931581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.831 [2024-07-15 14:16:09.931597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:11.831 [2024-07-15 14:16:09.931606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:11.831 [2024-07-15 14:16:09.931830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:11.831 [2024-07-15 14:16:09.932051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.831 [2024-07-15 14:16:09.932060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.831 [2024-07-15 14:16:09.932067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.831 [2024-07-15 14:16:09.935612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.094 [2024-07-15 14:16:09.944827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.094 [2024-07-15 14:16:09.945245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.094 [2024-07-15 14:16:09.945260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:12.094 [2024-07-15 14:16:09.945269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:12.094 [2024-07-15 14:16:09.945488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:12.094 [2024-07-15 14:16:09.945707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.094 [2024-07-15 14:16:09.945717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.094 [2024-07-15 14:16:09.945724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.094 [2024-07-15 14:16:09.949279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.094 [2024-07-15 14:16:09.958694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.094 [2024-07-15 14:16:09.959120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.094 [2024-07-15 14:16:09.959136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:12.095 [2024-07-15 14:16:09.959145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:12.095 [2024-07-15 14:16:09.959364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:12.095 [2024-07-15 14:16:09.959584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.095 [2024-07-15 14:16:09.959593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.095 [2024-07-15 14:16:09.959601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.095 [2024-07-15 14:16:09.963153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.095 [2024-07-15 14:16:09.972570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.095 [2024-07-15 14:16:09.973138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.095 [2024-07-15 14:16:09.973154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:12.095 [2024-07-15 14:16:09.973162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:12.095 [2024-07-15 14:16:09.973381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:12.095 [2024-07-15 14:16:09.973601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.095 [2024-07-15 14:16:09.973610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.095 [2024-07-15 14:16:09.973617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.095 [2024-07-15 14:16:09.977199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.095 [2024-07-15 14:16:09.986410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.095 [2024-07-15 14:16:09.987098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.095 [2024-07-15 14:16:09.987136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420 00:30:12.095 [2024-07-15 14:16:09.987147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set 00:30:12.095 [2024-07-15 14:16:09.987387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor 00:30:12.095 [2024-07-15 14:16:09.987610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.095 [2024-07-15 14:16:09.987621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.095 [2024-07-15 14:16:09.987628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.095 [2024-07-15 14:16:09.991190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
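Each nine-line cycle above is one reconnect attempt by the bdev_nvme reset path: the controller is disconnected, the TCP connect() to 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED, because the target is not listening yet), initialization therefore fails, and the reset is retried about every 14 ms. A minimal sketch of the same readiness check at the socket level, using bash's /dev/tcp redirection (illustrative only, not part of the test scripts):

# Poll until the NVMe/TCP listener accepts connections; until then every
# attempt fails exactly like the connect() calls in the log (errno 111).
while ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
    sleep 0.014   # comparable to the retry period visible in the timestamps
done
echo "port 4420 is accepting connections"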
00:30:12.095 [2024-07-15 14:16:10.000411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.095 [2024-07-15 14:16:10.000983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.095 [2024-07-15 14:16:10.001024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.095 [2024-07-15 14:16:10.001036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.095 [2024-07-15 14:16:10.001277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.095 [2024-07-15 14:16:10.001501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.095 [2024-07-15 14:16:10.001512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.095 [2024-07-15 14:16:10.001519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.095 [2024-07-15 14:16:10.005561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:12.095 [2024-07-15 14:16:10.014381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.095 [2024-07-15 14:16:10.015089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.095 [2024-07-15 14:16:10.015127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.095 [2024-07-15 14:16:10.015139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.095 [2024-07-15 14:16:10.015380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.095 [2024-07-15 14:16:10.015604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.095 [2024-07-15 14:16:10.015613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.095 [2024-07-15 14:16:10.015621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.095 [2024-07-15 14:16:10.019184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.095 [2024-07-15 14:16:10.028382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.095 [2024-07-15 14:16:10.029129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.095 [2024-07-15 14:16:10.029167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.095 [2024-07-15 14:16:10.029178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.095 [2024-07-15 14:16:10.029418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.095 [2024-07-15 14:16:10.029643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.095 [2024-07-15 14:16:10.029652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.095 [2024-07-15 14:16:10.029661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.095 [2024-07-15 14:16:10.033237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.095 [2024-07-15 14:16:10.042264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.095 [2024-07-15 14:16:10.042845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.095 [2024-07-15 14:16:10.042884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.095 [2024-07-15 14:16:10.042896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.095 [2024-07-15 14:16:10.043138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.095 [2024-07-15 14:16:10.043362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.095 [2024-07-15 14:16:10.043372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.095 [2024-07-15 14:16:10.043379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.095 [2024-07-15 14:16:10.046947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:12.095 [2024-07-15 14:16:10.056218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.095 [2024-07-15 14:16:10.056574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.095 [2024-07-15 14:16:10.056594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.095 [2024-07-15 14:16:10.056603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.095 [2024-07-15 14:16:10.056837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.095 [2024-07-15 14:16:10.057067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.095 [2024-07-15 14:16:10.057077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.095 [2024-07-15 14:16:10.057084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.095 [2024-07-15 14:16:10.060409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:12.095 [2024-07-15 14:16:10.060710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:12.095 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
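The two rpc_cmd calls above are thin wrappers around SPDK's scripts/rpc.py, talking to the already-running nvmf_tgt over its RPC socket. A hedged stand-alone equivalent of this bring-up (the extra -o from NVMF_TRANSPORT_OPTS is omitted here; flags otherwise mirror the trace):

# Create the TCP transport with an 8192-byte I/O unit size, then back it
# with a 64 MiB, 512-byte-block malloc ramdisk named Malloc0.
./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0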
00:30:12.095 [2024-07-15 14:16:10.070149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.095 [2024-07-15 14:16:10.070708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.095 [2024-07-15 14:16:10.070725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.095 [2024-07-15 14:16:10.070733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.095 [2024-07-15 14:16:10.070958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.095 [2024-07-15 14:16:10.071178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.095 [2024-07-15 14:16:10.071187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.095 [2024-07-15 14:16:10.071194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.095 [2024-07-15 14:16:10.074785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.095 [2024-07-15 14:16:10.084158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.095 [2024-07-15 14:16:10.084713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.095 [2024-07-15 14:16:10.084731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.095 [2024-07-15 14:16:10.084740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.095 [2024-07-15 14:16:10.084967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.095 [2024-07-15 14:16:10.085193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.095 [2024-07-15 14:16:10.085202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.095 [2024-07-15 14:16:10.085210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.095 [2024-07-15 14:16:10.088765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.096 Malloc0
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:12.096 [2024-07-15 14:16:10.097988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.096 [2024-07-15 14:16:10.098463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.096 [2024-07-15 14:16:10.098502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.096 [2024-07-15 14:16:10.098512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.096 [2024-07-15 14:16:10.098761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.096 [2024-07-15 14:16:10.098985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.096 [2024-07-15 14:16:10.098995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.096 [2024-07-15 14:16:10.099003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.096 [2024-07-15 14:16:10.102557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:12.096 [2024-07-15 14:16:10.111785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.096 [2024-07-15 14:16:10.112334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.096 [2024-07-15 14:16:10.112372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.096 [2024-07-15 14:16:10.112384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.096 [2024-07-15 14:16:10.112623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.096 [2024-07-15 14:16:10.112855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.096 [2024-07-15 14:16:10.112866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.096 [2024-07-15 14:16:10.112873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.096 [2024-07-15 14:16:10.116438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
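Steps 19-21 of bdevperf.sh build the export path: create subsystem cnode1 (allowing any host with -a), attach Malloc0 as its namespace, and open a TCP listener, which produces the "Listening on 10.0.0.2 port 4420" notice just below. A hedged stand-alone equivalent, including the initiator-side nvme-cli connect a kernel host would use (illustration only, not part of this run):

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# From an initiator with the nvme-tcp kernel module loaded:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1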
00:30:12.096 [2024-07-15 14:16:10.125677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.096 [2024-07-15 14:16:10.126387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.096 [2024-07-15 14:16:10.126425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd540 with addr=10.0.0.2, port=4420
00:30:12.096 [2024-07-15 14:16:10.126436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd540 is same with the state(5) to be set
00:30:12.096 [2024-07-15 14:16:10.126675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd540 (9): Bad file descriptor
00:30:12.096 [2024-07-15 14:16:10.126815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:12.096 [2024-07-15 14:16:10.126912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:12.096 [2024-07-15 14:16:10.126924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:12.096 [2024-07-15 14:16:10.126932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:12.096 [2024-07-15 14:16:10.130486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:12.096 14:16:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1565565
00:30:12.096 [2024-07-15 14:16:10.139500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:12.096 [2024-07-15 14:16:10.179513] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:22.123
00:30:22.123 Latency(us)
00:30:22.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:22.123 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:22.123 Verification LBA range: start 0x0 length 0x4000
00:30:22.123 Nvme1n1 : 15.00 8476.17 33.11 9741.30 0.00 7000.38 566.61 17148.59
00:30:22.123 ===================================================================================================================
00:30:22.123 Total : 8476.17 33.11 9741.30 0.00 7000.38 566.61 17148.59
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:22.123 rmmod nvme_tcp
00:30:22.123 rmmod nvme_fabrics
00:30:22.123 rmmod nvme_keyring
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
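nvmftestfini tears things down in reverse order of setup: the subsystem is deleted over RPC, then the kernel initiator modules are unloaded, with nvme_fabrics and nvme_keyring falling out as no-longer-needed dependencies of nvme_tcp, exactly as the rmmod lines show. As a stand-alone sketch of the same cleanup:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics   # no-op if the previous line already removed it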
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1566627 ']'
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1566627
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1566627 ']'
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1566627
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1566627
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1566627'
00:30:22.123 killing process with pid 1566627
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1566627
00:30:22.123 14:16:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1566627
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:22.123 14:16:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:23.096 14:16:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:23.096
00:30:23.096 real 0m28.681s
00:30:23.096 user 1m3.424s
00:30:23.096 sys 0m7.694s
00:30:23.096 14:16:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:23.096 14:16:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:23.096 ************************************
00:30:23.096 END TEST nvmf_bdevperf
00:30:23.096 ************************************
00:30:23.096 14:16:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:30:23.096 14:16:21 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:23.096 14:16:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:30:23.096 14:16:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:23.096 14:16:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:23.358 ************************************
00:30:23.358 START TEST nvmf_target_disconnect
00:30:23.358 ************************************
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:23.358 * Looking for test storage...
00:30:23.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
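nvmf/common.sh derives the host identity from nvme-cli: nvme gen-hostnqn prints a UUID-based NQN, and the UUID portion is reused as the host ID, so the two values always agree (the UUID above is specific to this machine). A sketch of that derivation; the exact parameter expansion is an assumption, not copied from the script:

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
echo "--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID"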
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:30:23.358 14:16:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:30:31.504 Found 0000:31:00.0 (0x8086 - 0x159b)
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:30:31.504 Found 0000:31:00.1 (0x8086 - 0x159b)
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
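The discovery loop classifies NICs purely from PCI IDs (vendor 0x8086 with device 0x159b lands in the e810 list) and then resolves the bound net interface through the /sys/bus/pci/devices/$pci/net/ glob, which is why the next lines can print the cvl_0_* names. The same lookup can be done by hand; an illustrative sketch:

pci=0000:31:00.0
lspci -s "$pci" -n                   # prints 8086:159b, an E810 port
ls "/sys/bus/pci/devices/$pci/net/"  # prints the attached netdev, e.g. cvl_0_0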
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:30:31.504 Found net devices under 0000:31:00.0: cvl_0_0
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:30:31.504 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:30:31.505 Found net devices under 0000:31:00.1: cvl_0_1
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
cvl_0_0 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:31.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:30:31.505 00:30:31.505 --- 10.0.0.2 ping statistics --- 00:30:31.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.505 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:30:31.505 00:30:31.505 --- 10.0.0.1 ping statistics --- 00:30:31.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.505 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.505 14:16:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:31.766 ************************************ 00:30:31.766 START TEST nvmf_target_disconnect_tc1 00:30:31.766 ************************************ 00:30:31.766 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:30:31.766 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:30:31.767 
14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.767 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.767 [2024-07-15 14:16:29.748418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.767 [2024-07-15 14:16:29.748475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94c4b0 with addr=10.0.0.2, port=4420 00:30:31.767 [2024-07-15 14:16:29.748505] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:31.767 [2024-07-15 14:16:29.748522] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:31.767 [2024-07-15 14:16:29.748531] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:31.767 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:31.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:31.767 Initializing NVMe Controllers 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:31.767 00:30:31.767 real 0m0.121s 00:30:31.767 user 0m0.058s 00:30:31.767 sys 0m0.062s 
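At this point in tc1 nothing is listening on 10.0.0.2:4420 yet (the nvmf target is only started for tc2 below), so spdk_nvme_probe() is expected to fail, and errno 111 is Linux's ECONNREFUSED. The NOT wrapper from autotest_common.sh inverts the exit status, so the test passes precisely because the reconnect example fails. A minimal sketch of that expected-failure pattern, with $rootdir a hypothetical placeholder for the SPDK checkout:

    # tc1 passes only if the connect attempt fails; success would be a test bug.
    if "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "FAIL: reconnect succeeded with no target listening" >&2
        exit 1
    fi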
00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:31.767 ************************************ 00:30:31.767 END TEST nvmf_target_disconnect_tc1 00:30:31.767 ************************************ 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:31.767 ************************************ 00:30:31.767 START TEST nvmf_target_disconnect_tc2 00:30:31.767 ************************************ 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1573298 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1573298 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1573298 ']' 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
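Here tc2 launches the target inside the cvl_0_0_ns_spdk namespace set up earlier, with core mask 0xF0 (cores 4-7, matching the "Reactor started on core ..." notices further down) and records pid 1573298 as nvmfpid; waitforlisten then blocks until the app's RPC server answers on /var/tmp/spdk.sock. A bare-bones sketch of that bring-up, assuming rpc.py from the SPDK tree and again using $rootdir as a placeholder:

    # Start the target in the namespace, then poll its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done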
00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.767 14:16:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.028 [2024-07-15 14:16:29.904044] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:32.028 [2024-07-15 14:16:29.904088] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.028 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.028 [2024-07-15 14:16:29.992098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.029 [2024-07-15 14:16:30.083933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.029 [2024-07-15 14:16:30.083990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.029 [2024-07-15 14:16:30.083998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.029 [2024-07-15 14:16:30.084006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.029 [2024-07-15 14:16:30.084012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.029 [2024-07-15 14:16:30.084166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:32.029 [2024-07-15 14:16:30.084368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:32.029 [2024-07-15 14:16:30.084535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:32.029 [2024-07-15 14:16:30.084537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.972 Malloc0 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:32.972 14:16:30 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.972 [2024-07-15 14:16:30.798902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.972 [2024-07-15 14:16:30.827202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1573383 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:32.972 14:16:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.972 EAL: No free 2048 kB 
hugepages reported on node 1 00:30:34.893 14:16:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1573298 00:30:34.893 14:16:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:34.893 Read completed with error (sct=0, sc=8) 00:30:34.893 starting I/O failed 00:30:34.893 [31 similar completions elided: all 32 outstanding I/Os (15 reads, 17 writes) completed with error (sct=0, sc=8), each followed by "starting I/O failed"] 00:30:34.893 [2024-07-15 14:16:32.856251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.893 [2024-07-15 14:16:32.856649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.856670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable
to recover it. 00:30:34.893 [2024-07-15 14:16:32.857123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.857161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.857481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.857496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.857695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.857712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.858144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.858183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.858508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.858521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.858977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.859014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.859334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.859348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.859677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.859689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.859943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.893 [2024-07-15 14:16:32.859955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.893 qpair failed and we were unable to recover it. 00:30:34.893 [2024-07-15 14:16:32.860254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.894 [2024-07-15 14:16:32.860265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.894 qpair failed and we were unable to recover it. 
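This is the heart of tc2: the target (pid 1573298) was SIGKILLed above while the reconnect example still had 32 I/Os in flight, so every reconnect attempt for the rest of the 10-second run (-t 10) is refused, and the pattern repeats until the window expires (condensed below). The tqpair value 0xad9a50 appears to be the address of the one TCP qpair being retried, which is why it never changes. errno 111 is ECONNREFUSED, which can be confirmed from a shell, assuming perl is available:

    perl -e '$! = 111; print "$!\n"'    # prints: Connection refused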
[Over a hundred further connection attempts elided, timestamps 2024-07-15 14:16:32.860 through 14:16:32.897: each repeats the same three-line pattern seen above, posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", with only the timestamps advancing.]
00:30:34.895 [2024-07-15 14:16:32.897643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.897654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.898006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.898017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.898336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.898347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.898624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.898635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.898947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.898958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.899250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.899260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.899589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.899600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.899771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.899784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.900090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.900101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.900419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.900430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 
00:30:34.895 [2024-07-15 14:16:32.900734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.900745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.901058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.901068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.901377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.901389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.901726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.895 [2024-07-15 14:16:32.901737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.895 qpair failed and we were unable to recover it. 00:30:34.895 [2024-07-15 14:16:32.902098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.902109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.902421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.902434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.902754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.902767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.903086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.903097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.903408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.903420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.903746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.903761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 
00:30:34.896 [2024-07-15 14:16:32.904060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.904071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.904398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.904409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.904719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.904731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.905070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.905081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.905389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.905401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.905698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.905708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.906034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.906046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.906355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.906365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.906694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.906706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.907018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.907029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 
00:30:34.896 [2024-07-15 14:16:32.907376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.907387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.907731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.907743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.907966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.907976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.908258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.908269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.908610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.908621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.908940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.908952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.909293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.909304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.910159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.910182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.910518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.910531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.910881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.910892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 
00:30:34.896 [2024-07-15 14:16:32.911236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.911247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.911589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.911601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.911695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.911705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.911911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.911922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.912219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.912232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.912576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.912587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.912912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.912924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.913269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.913281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.913511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.913522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.913837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.913853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 
00:30:34.896 [2024-07-15 14:16:32.914209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.914220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.914562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.914573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.914810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.914822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.915164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.915176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.915519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.915529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.915847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.915859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.916213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.916224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.916594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.916605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.916940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.916951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.917296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.917307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 
00:30:34.896 [2024-07-15 14:16:32.917655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.917666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.917863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.917872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.918252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.918262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.918612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.918625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.918974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.918985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.919324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.919335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.919686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.919696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.920026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.920037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.920375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.920386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.920698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.920708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 
00:30:34.896 [2024-07-15 14:16:32.921027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.921039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.921377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.921389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.921697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.896 [2024-07-15 14:16:32.921708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.896 qpair failed and we were unable to recover it. 00:30:34.896 [2024-07-15 14:16:32.922021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.922033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.922373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.922384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.922725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.922735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.923044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.923055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.923389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.923399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.923745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.923760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.924154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.924166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 
00:30:34.897 [2024-07-15 14:16:32.924511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.924522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.924858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.924869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.925173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.925185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.925390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.925401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.925738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.925749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.926084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.926095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.926401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.926413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.926713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.926724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.927028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.927039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.927360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.927371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 
00:30:34.897 [2024-07-15 14:16:32.927706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.927718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.928076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.928087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.928403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.928414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.928762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.928774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.929068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.929079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.929399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.929410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.929755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.929766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.930089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.930100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.930417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.930429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.930779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.930791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 
00:30:34.897 [2024-07-15 14:16:32.931118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.931129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.931448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.931458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.931676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.931687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.932008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.932019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.932340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.932352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.932542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.932553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.932890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.932901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.933219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.933230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.933574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.933585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.933907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.933919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 
00:30:34.897 [2024-07-15 14:16:32.934253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.934264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.934615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.934625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.934964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.934975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.935288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.935298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.935640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.935651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.935988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.935998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.936361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.936372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.936713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.936726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.937035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.937046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.937411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.937423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 
00:30:34.897 [2024-07-15 14:16:32.937728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.937740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.938056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.938067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.938391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.938403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.938707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.938719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.939059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.939071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.939300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.939311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.939624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.939636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.939982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.939993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.940146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.940156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 00:30:34.897 [2024-07-15 14:16:32.940503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.897 [2024-07-15 14:16:32.940513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:34.897 qpair failed and we were unable to recover it. 
00:30:34.897 [2024-07-15 14:16:32.940836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.897 [2024-07-15 14:16:32.940848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:34.897 qpair failed and we were unable to recover it.
[The three lines above repeat verbatim, with only the timestamps advancing, for every subsequent connect attempt from 14:16:32.941180 through 14:16:33.010896 (roughly 210 attempts in total): each connect() to tqpair=0xad9a50 at 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered.]
00:30:35.173 [2024-07-15 14:16:33.011215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.011226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.011544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.011555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.011907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.011918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.012260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.012272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.012583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.012593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.012942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.012953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.013134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.013145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.013450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.013460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.013821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.013833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 00:30:35.173 [2024-07-15 14:16:33.014164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.173 [2024-07-15 14:16:33.014175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.173 qpair failed and we were unable to recover it. 
00:30:35.174 [2024-07-15 14:16:33.014520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.014531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.014880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.014892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.015208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.015220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.015411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.015422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.015778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.015790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.016134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.016145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.016465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.016476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.016820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.016831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.017169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.017180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.017370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.017381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 
00:30:35.174 [2024-07-15 14:16:33.017671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.017682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.018007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.018018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.018342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.018356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.018696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.018707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.019092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.019103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.019424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.019434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.019775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.019787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.020124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.020135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.020454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.020466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.020839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.020850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 
00:30:35.174 [2024-07-15 14:16:33.021154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.021166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.021511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.021522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.021710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.021722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.021965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.021978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.022335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.022346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.022692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.022703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.023025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.023037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.023347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.023359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.023699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.023710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.024023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.024035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 
00:30:35.174 [2024-07-15 14:16:33.024354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.024365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.174 qpair failed and we were unable to recover it. 00:30:35.174 [2024-07-15 14:16:33.024675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.174 [2024-07-15 14:16:33.024687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.025025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.025037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.025360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.025371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.025712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.025724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.025899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.025911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.026238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.026249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.026593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.026604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.026920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.026931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.027249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.027260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 
00:30:35.175 [2024-07-15 14:16:33.027638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.027650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.028049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.028061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.028474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.028485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.028841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.028852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.029186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.029200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.029515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.029526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.029877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.029888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.030226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.030238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.030573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.030586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.030901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.030912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 
00:30:35.175 [2024-07-15 14:16:33.031069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.031080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.031424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.031435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.031774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.031785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.032113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.032124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.032445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.032456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.032802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.032814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.033569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.033590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.033914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.033928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.034266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.034276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.034611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.034622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 
00:30:35.175 [2024-07-15 14:16:33.038765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.038788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.039112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.039126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.039473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.039487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.039779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.175 [2024-07-15 14:16:33.039792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.175 qpair failed and we were unable to recover it. 00:30:35.175 [2024-07-15 14:16:33.040141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.040158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.040493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.040506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.040844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.040856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.041204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.041216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.041554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.041568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.041767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.041783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 
00:30:35.176 [2024-07-15 14:16:33.042075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.042087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.042434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.042448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.042795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.042808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.043146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.043158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.043510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.043524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.043851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.043864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.044191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.044203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.044548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.044562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.044914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.044926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.045276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.045290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 
00:30:35.176 [2024-07-15 14:16:33.045642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.045657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.045845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.045861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.046193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.046209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.046551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.046565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.046904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.046920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.047115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.047126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.047456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.047468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.047804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.047815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.048137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.048149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.048487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.048498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 
00:30:35.176 [2024-07-15 14:16:33.048828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.048840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.049197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.049208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.049518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.049530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.049870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.049881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.050206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.050218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.050470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.050481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.050843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.050855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.176 [2024-07-15 14:16:33.051183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.176 [2024-07-15 14:16:33.051194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.176 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.051538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.051549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.051887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.051900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 
00:30:35.177 [2024-07-15 14:16:33.052115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.052126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.052469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.052479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.052808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.052820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.053150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.053161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.053471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.053483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.053792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.053804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.054104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.054114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.054461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.054472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.054809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.054820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.055180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.055192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 
00:30:35.177 [2024-07-15 14:16:33.055554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.055565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.055791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.055802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.056129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.056140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.056388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.056398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.056707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.056718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.057030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.057042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.057422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.057433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.057674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.057685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.057908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.057920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 00:30:35.177 [2024-07-15 14:16:33.058247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.177 [2024-07-15 14:16:33.058257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.177 qpair failed and we were unable to recover it. 
00:30:35.177 [2024-07-15 14:16:33.058451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.177 [2024-07-15 14:16:33.058462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.177 qpair failed and we were unable to recover it.
00:30:35.177 [2024-07-15 14:16:33.058779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.177 [2024-07-15 14:16:33.058792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.177 qpair failed and we were unable to recover it.
00:30:35.177 [2024-07-15 14:16:33.058988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.177 [2024-07-15 14:16:33.059000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.177 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats with only the microsecond timestamps advancing, from 14:16:33.059 through 14:16:33.128 — every attempt against tqpair=0xad9a50 at 10.0.0.2, port=4420 fails with errno = 111 and the qpair is not recovered ...]
00:30:35.184 [2024-07-15 14:16:33.128031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.184 [2024-07-15 14:16:33.128042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.184 qpair failed and we were unable to recover it.
00:30:35.184 [2024-07-15 14:16:33.128259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.128269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.128557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.128567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.128630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.128639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.128920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.128931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.129278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.129288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.129497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.129508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.129834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.129845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.130204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.130216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.130538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.130549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 00:30:35.184 [2024-07-15 14:16:33.130889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.130901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.184 qpair failed and we were unable to recover it. 
00:30:35.184 [2024-07-15 14:16:33.131206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.184 [2024-07-15 14:16:33.131217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.131537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.131548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.131870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.131881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.132135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.132145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.132349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.132361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.132748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.132766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.133081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.133092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.133259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.133271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.133617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.133629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.133967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.133978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 
00:30:35.185 [2024-07-15 14:16:33.134308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.134320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.134536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.134546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.134893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.134903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.135245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.135255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.135578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.135588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.135908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.135918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.136240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.136252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.136442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.136453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.136546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.136556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.136904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.136915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 
00:30:35.185 [2024-07-15 14:16:33.137264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.137274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.137599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.137611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.137939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.137950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.138274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.138286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.138470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.138481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.138819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.138830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.139032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.139043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.139273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.139283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.139601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.139611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.139929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.139940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 
00:30:35.185 [2024-07-15 14:16:33.140262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.140272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.140617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.185 [2024-07-15 14:16:33.140629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.185 qpair failed and we were unable to recover it. 00:30:35.185 [2024-07-15 14:16:33.141035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.141046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.141397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.141409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.141761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.141772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.142098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.142109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.142432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.142444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.142646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.142658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.143009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.143021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.143211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.143221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 
00:30:35.186 [2024-07-15 14:16:33.143445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.143455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.143635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.143645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.143941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.143952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.144297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.144307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.144633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.144645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.144978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.144990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.145335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.145346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.145673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.145683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.146018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.146029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.146377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.146389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 
00:30:35.186 [2024-07-15 14:16:33.146574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.146587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.146779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.146790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.147147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.147158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.147482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.147495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.147819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.147830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.148143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.148154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.148476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.148487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.148681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.148691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.148882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.148893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.149184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.149195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 
00:30:35.186 [2024-07-15 14:16:33.149521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.149532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.149830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.149841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.150198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.150208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.150523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.186 [2024-07-15 14:16:33.150534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.186 qpair failed and we were unable to recover it. 00:30:35.186 [2024-07-15 14:16:33.150847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.150858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.151186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.151199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.151535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.151546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.151765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.151775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.152098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.152109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.152450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.152460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 
00:30:35.187 [2024-07-15 14:16:33.152810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.152821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.153163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.153174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.153521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.153531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.153854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.153865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.154200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.154211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.154523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.154534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.154868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.154879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.155211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.155224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.155548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.155558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.155906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.155917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 
00:30:35.187 [2024-07-15 14:16:33.156259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.156270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.156597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.156608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.156946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.156958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.157286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.157296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.157575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.157586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.157917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.157929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.158245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.158256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.158465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.158476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.158657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.158667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.158898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.158909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 
00:30:35.187 [2024-07-15 14:16:33.159229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.159240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.159593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.159605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.159926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.159937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.160261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.160272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.160393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.160406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.160728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.160739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.161103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.187 [2024-07-15 14:16:33.161114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.187 qpair failed and we were unable to recover it. 00:30:35.187 [2024-07-15 14:16:33.161409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.161420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.161607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.161620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.161829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.161841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 
00:30:35.188 [2024-07-15 14:16:33.162199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.162210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.162533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.162545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.162866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.162877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.163225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.163236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.163609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.163619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.163929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.163940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.164287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.164298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.164622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.164632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.164929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.164940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.165245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.165256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 
00:30:35.188 [2024-07-15 14:16:33.165446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.165456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.165787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.165798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.166141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.166154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.166440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.166451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.166641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.166652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.166962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.166973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.167287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.167298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.167626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.167636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.167974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.167985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 00:30:35.188 [2024-07-15 14:16:33.168198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.188 [2024-07-15 14:16:33.168209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.188 qpair failed and we were unable to recover it. 
00:30:35.188 [2024-07-15 14:16:33.168558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.188 [2024-07-15 14:16:33.168570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.188 qpair failed and we were unable to recover it.
[... the same three-line failure triple — connect() failed, errno = 111; sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every retry from 14:16:33.168558 through 14:16:33.235886, roughly 200 occurrences in total; only the first and last are shown ...]
00:30:35.195 [2024-07-15 14:16:33.235875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.195 [2024-07-15 14:16:33.235886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.195 qpair failed and we were unable to recover it.
00:30:35.195 [2024-07-15 14:16:33.236235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.236245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.236549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.236560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.236895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.236906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.237224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.237235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.237578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.237589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.237823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.237834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.238145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.238155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.238488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.238499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.238797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.238808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.238988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.238999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 
00:30:35.195 [2024-07-15 14:16:33.239344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.239355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.195 [2024-07-15 14:16:33.239679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.195 [2024-07-15 14:16:33.239690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.195 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.240019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.240029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.240373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.240384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.240729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.240739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.241226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.241238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.241592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.241603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.241917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.241928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.242231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.242241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.242560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.242572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 
00:30:35.196 [2024-07-15 14:16:33.242906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.242917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.243261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.243271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.243593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.243605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.243827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.243838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.244184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.244196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.244553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.244564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.244890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.244902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.245227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.245238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.245580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.245592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.245937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.245949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 
00:30:35.196 [2024-07-15 14:16:33.246274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.246284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.246607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.246617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.247033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.247044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.247322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.247332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.247658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.247669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.248006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.248018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.248316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.248329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.248674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.248685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.249075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.249086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.249396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.249407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 
00:30:35.196 [2024-07-15 14:16:33.249750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.249765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.250099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.250110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.250437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.250447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.196 [2024-07-15 14:16:33.250819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.196 [2024-07-15 14:16:33.250830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.196 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.251164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.251176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.251521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.251532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.251828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.251840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.252160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.252171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.252516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.252527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.252904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.252915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 
00:30:35.197 [2024-07-15 14:16:33.253229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.253239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.253428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.253439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.253718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.253728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.254048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.254059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.254397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.254409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.254729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.254742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.254976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.254986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.255302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.255313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.255637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.255648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.255946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.255956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 
00:30:35.197 [2024-07-15 14:16:33.256290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.256301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.256530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.256540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.256866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.256877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.257201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.257213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.257555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.257566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.257839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.257850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.258191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.258203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.258511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.258522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.258832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.258844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.259176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.259187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 
00:30:35.197 [2024-07-15 14:16:33.259579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.259589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.259904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.259916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.260256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.260267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.260610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.260623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.260970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.260981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.261304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.261315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.197 qpair failed and we were unable to recover it. 00:30:35.197 [2024-07-15 14:16:33.261655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.197 [2024-07-15 14:16:33.261666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.261993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.262004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.262322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.262333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.262696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.262708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 
00:30:35.198 [2024-07-15 14:16:33.262892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.262905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.263238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.263248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.263577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.263587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.263983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.263994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.264333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.264345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.264689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.264700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.264920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.264931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.265128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.265138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.265424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.265435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.265782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.265793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 
00:30:35.198 [2024-07-15 14:16:33.266110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.266121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.266313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.266323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.266601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.266611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.266950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.266962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.267300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.267311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.267646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.267656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.267998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.268009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.268354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.268365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.268685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.268696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.269015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.269026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 
00:30:35.198 [2024-07-15 14:16:33.269363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.269373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.269713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.269724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.270052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.270062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.270274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.270284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.270620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.270635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.270919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.270930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.198 [2024-07-15 14:16:33.271272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.198 [2024-07-15 14:16:33.271284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.198 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.271607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.271617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.271933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.271945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.272298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.272308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 
00:30:35.199 [2024-07-15 14:16:33.272687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.272698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.273017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.273028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.273405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.273415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.273758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.273769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.274065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.274075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.274390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.274402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.274715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.274726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.199 [2024-07-15 14:16:33.275065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.199 [2024-07-15 14:16:33.275076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.199 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.275376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.275388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.275706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.275717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 
00:30:35.467 [2024-07-15 14:16:33.276055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.276067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.276414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.276425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.276746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.276760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.276937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.276946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.277155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.277165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.277385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.277395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.277726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.277737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.278058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.278069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.278408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.278418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.278769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.278780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 
00:30:35.467 [2024-07-15 14:16:33.279098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.279109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.279430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.279441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.279726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.279737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.280030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.280040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.467 [2024-07-15 14:16:33.280416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.467 [2024-07-15 14:16:33.280427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.467 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.280738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.280749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.281091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.281102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.281442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.281453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.281776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.281787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.282126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.282136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 
00:30:35.468 [2024-07-15 14:16:33.282455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.282467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.282804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.282815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.283019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.283030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.283328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.283339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.283679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.283691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.284002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.284015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.284201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.284213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.284491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.284502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.284844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.284855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 00:30:35.468 [2024-07-15 14:16:33.285233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.468 [2024-07-15 14:16:33.285244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.468 qpair failed and we were unable to recover it. 
00:30:35.468 [2024-07-15 14:16:33.285590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.285601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.285915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.285927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.286230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.286240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.286539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.286550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.286881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.286892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.287221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.287232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.287567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.287578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.287898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.287909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.288286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.288297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.288612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.288624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.288934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.288945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.289241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.289251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.289441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.289453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.289731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.289742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.289958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.289970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.290299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.290311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.290630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.290641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.290938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.290949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.291297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.291309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.291651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.291663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.291906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.291917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.292254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.292265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.292604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.292618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.292914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.292924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.293234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.293246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.293512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.293523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.293860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.293871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.294198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.294210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.294568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.294579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.294943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.294955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.295139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.295150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.295484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.295494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.295823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.295835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.296160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.296171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.296463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.296473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.296813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.296824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.297165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.297176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.297492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.297504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.297884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.297896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.298205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.298217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.298529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.298539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.298861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.298872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.299220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.299231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.299586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.299597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.299927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.299939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.300284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.300295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.300642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.300654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.300977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.300988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.301312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.301323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.301652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.301663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.302004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.302016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.302367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.468 [2024-07-15 14:16:33.302379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.468 qpair failed and we were unable to recover it.
00:30:35.468 [2024-07-15 14:16:33.302699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.302710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.303049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.303061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.303409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.303419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.303710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.303722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.304053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.304065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.304388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.304399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.304709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.304720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.305043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.305054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.305377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.305388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.305712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.305723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.306028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.306040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.306226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.306239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.306616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.306627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.306889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.306899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.307250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.307260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.307609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.307621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.308009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.308020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.308341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.308352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.308690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.308702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.309017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.309028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.309353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.309364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.309684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.309696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.310030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.310042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.310353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.310364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.310678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.310689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.311022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.311035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.311369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.311380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.311727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.311738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.312065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.312077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.312433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.312444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.312786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.312797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.313124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.313135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.313460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.313471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.313791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.313802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.314119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.314130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.314471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.314482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.314804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.314815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.315157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.315167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.315509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.315523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.315858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.315869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.316100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.316110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.316397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.316408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.316744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.316762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.317092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.317103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.317447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.317458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.317779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.317790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.318130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.318140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.318441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.318453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.318776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.318787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.319092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.319104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.319385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.319395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.319736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.319746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.320074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.320086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.320452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.320463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.320816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.320827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.320997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.321007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.321232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.321242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.321606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.321617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.321803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.321814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.322049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.322059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.322379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.322390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.322688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.322699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.323019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.323030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.323371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.323383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.323750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.323765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.324084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.469 [2024-07-15 14:16:33.324094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.469 qpair failed and we were unable to recover it.
00:30:35.469 [2024-07-15 14:16:33.324432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.324444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.324788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.324799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.325120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.325131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.325469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.325479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.325819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.325831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.326174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.326184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.326509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.326520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.326736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.326746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.327078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.327088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.327397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.327409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.327759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.327770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.328104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.328115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.328450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.328462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.328778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.328792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.329111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.329121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.329441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.329451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.329795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.329807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.330125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.330136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.330475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.330487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.330809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.330820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.331141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.331152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.331380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.331390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.331759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.331769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.332093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.332104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.332477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.332488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.332809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.332821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.333127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.333139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.333458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.333469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.333802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.333813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.333886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.333895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.334171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.334181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.334522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.334533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.334852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.334865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.335206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.335217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.335564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.335574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.335900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.335911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.336288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.336298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.336485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.336496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.336819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.336830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.337146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.337158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.337477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.337490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.337828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.337839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.338179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.338190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.338516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.338527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.338851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.338862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.339246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.339257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.339567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.339578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.339873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.339884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.340230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.340241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.340431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.340442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.340745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.340760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.340955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.470 [2024-07-15 14:16:33.340965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.470 qpair failed and we were unable to recover it.
00:30:35.470 [2024-07-15 14:16:33.341298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.341309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.470 [2024-07-15 14:16:33.341651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.341663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.470 [2024-07-15 14:16:33.341979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.341991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.470 [2024-07-15 14:16:33.342312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.342324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.470 [2024-07-15 14:16:33.342518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.342529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.470 [2024-07-15 14:16:33.342872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.342883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.470 [2024-07-15 14:16:33.343189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.343200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.470 [2024-07-15 14:16:33.343387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.470 [2024-07-15 14:16:33.343397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.470 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.343719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.343730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.344070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.344080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 
00:30:35.471 [2024-07-15 14:16:33.344432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.344443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.344758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.344770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.345081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.345092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.345402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.345414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.345760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.345771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.346125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.346135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.346455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.346466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.346808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.346820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.347171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.347182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.347528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.347539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 
00:30:35.471 [2024-07-15 14:16:33.347859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.347870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.348251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.348262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.348602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.348613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.348958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.348970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.349296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.349307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.349647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.349659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.349849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.349861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.350062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.350075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.350295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.350307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.350614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.350627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 
00:30:35.471 [2024-07-15 14:16:33.350978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.350989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.351302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.351313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.351650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.351662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.351984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.351996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.352339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.352351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.352674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.352685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.353007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.353018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.353358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.353370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.353712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.353723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.354057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.354069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 
00:30:35.471 [2024-07-15 14:16:33.354398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.354410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.354742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.354762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.354953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.354965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.355261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.355273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.355595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.355606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.355935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.355946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.356248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.356259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.356585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.356595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.356915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.356925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.357120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.357130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 
00:30:35.471 [2024-07-15 14:16:33.357302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.357312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.357642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.357653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.357957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.357967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.358314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.358325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.358672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.358683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.359011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.359021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.359250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.359261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.359567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.359577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.359780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.359791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.360102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.360112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 
00:30:35.471 [2024-07-15 14:16:33.360432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.360442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.360629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.360640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.360980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.360991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.361308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.361320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.361662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.361673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.361856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.361868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.362180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.362192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.362510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.362521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.362714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.362724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.363050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.363060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 
00:30:35.471 [2024-07-15 14:16:33.363371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.363383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.363705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.363717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.364062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.364073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.364410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.364421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.364739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.471 [2024-07-15 14:16:33.364750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.471 qpair failed and we were unable to recover it. 00:30:35.471 [2024-07-15 14:16:33.364976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.364987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.365352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.365363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.365704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.365715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.366055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.366067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.366317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.366329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 
00:30:35.472 [2024-07-15 14:16:33.366556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.366568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.366762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.366776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.367157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.367168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.367489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.367500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.367702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.367711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.367999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.368009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.368355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.368366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.368688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.368700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.369031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.369042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.369236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.369247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 
00:30:35.472 [2024-07-15 14:16:33.369559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.369570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.369940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.369951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.370268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.370278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.370620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.370631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.370971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.370982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.371329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.371340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.371662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.371673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.371991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.372004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.372332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.372344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.372592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.372604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 
00:30:35.472 [2024-07-15 14:16:33.372926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.372937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.373263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.373274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.373595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.373605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.373922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.373932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.374253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.374266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.374601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.374612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.374839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.374850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.375152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.375162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.375418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.375429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.375768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.375779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 
00:30:35.472 [2024-07-15 14:16:33.375985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.375995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.376402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.376413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.376708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.376719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.376912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.376922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.377257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.377269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.377605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.377617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.377676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.377686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.377963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.377973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.378313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.378323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.378658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.378669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 
00:30:35.472 [2024-07-15 14:16:33.378872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.378883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.379258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.379268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.379621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.379632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.379842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.379853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.380145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.380155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.380470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.380482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.380823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.380834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.381153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.381164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.381379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.381389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.381571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.381582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 
00:30:35.472 [2024-07-15 14:16:33.381913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.381924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.382158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.382170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.382502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.382515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.382843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.382855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.383064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.383075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.383321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.383331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.383536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.383546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.383832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.383843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.384171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.384184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.472 [2024-07-15 14:16:33.384529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.384540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 
00:30:35.472 [2024-07-15 14:16:33.384727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.472 [2024-07-15 14:16:33.384738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.472 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.385045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.385055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.385411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.385422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.385763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.385774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.386018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.386029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.386305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.386315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.386624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.386635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.386939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.386951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.387288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.387299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.387663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.387673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 
00:30:35.473 [2024-07-15 14:16:33.388011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.388024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.388211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.388221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.388545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.388557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.388880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.388891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.389273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.389283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.389634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.389645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.389995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.390005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.390307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.390319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.390661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.390673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 00:30:35.473 [2024-07-15 14:16:33.390882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.473 [2024-07-15 14:16:33.390893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.473 qpair failed and we were unable to recover it. 
00:30:35.477 [2024-07-15 14:16:33.455985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.455997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.456191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.456201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.456489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.456499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.456835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.456846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.457040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.457050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.457271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.457283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.457604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.457615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.457928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.457940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.458280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.458290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.458613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.458623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 
00:30:35.477 [2024-07-15 14:16:33.458935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.458945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.459246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.459257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.459582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.459593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.459950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.459962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.460267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.460277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.460620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.460631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.460968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.460980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.461298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.461309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.461374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.461385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.461680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.461690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 
00:30:35.477 [2024-07-15 14:16:33.462024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.462035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.462378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.462390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.462734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.462745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.463097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.463109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.463421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.463433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.463759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.477 [2024-07-15 14:16:33.463771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.477 qpair failed and we were unable to recover it. 00:30:35.477 [2024-07-15 14:16:33.464083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.464093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.464282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.464292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.464525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.464536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.464842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.464853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 
00:30:35.478 [2024-07-15 14:16:33.465184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.465196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.465391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.465402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.465730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.465740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.466062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.466073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.466409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.466419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.466474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.466483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.466789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.466799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.467128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.467139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.467364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.467374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.467651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.467662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 
00:30:35.478 [2024-07-15 14:16:33.467890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.467901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.468250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.468260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.468574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.468585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.468926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.468937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.469280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.469291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.469614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.469625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.469943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.469954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.470270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.470282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.470456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.470468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.470783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.470794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 
00:30:35.478 [2024-07-15 14:16:33.471073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.471085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.471293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.471304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.471631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.471642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.471983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.471994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.472321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.472332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.472668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.472678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.473029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.473040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.473357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.473368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.473690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.473703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.474029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.474040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 
00:30:35.478 [2024-07-15 14:16:33.474384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.474394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.474723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.474735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.475061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.475072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.475411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.475421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.475768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.475779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.476118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.476129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.476441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.478 [2024-07-15 14:16:33.476452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.478 qpair failed and we were unable to recover it. 00:30:35.478 [2024-07-15 14:16:33.476790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.476801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.477127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.477138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.477454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.477466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 
00:30:35.479 [2024-07-15 14:16:33.477788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.477799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.478128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.478140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.478481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.478492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.478817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.478828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.479026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.479037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.479363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.479374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.479679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.479690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.480021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.480032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.480357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.480370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.480703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.480713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 
00:30:35.479 [2024-07-15 14:16:33.481025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.481037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.481402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.481413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.481732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.481742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.482054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.482065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.482255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.482266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.482587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.482599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.482918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.482929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.483271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.483282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.483626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.483636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.483861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.483872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 
00:30:35.479 [2024-07-15 14:16:33.484201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.484212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.484551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.484563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.484878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.484889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.485118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.485129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.485448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.485458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.485778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.485790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.486150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.486161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.486475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.486487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.486810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.486822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.487105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.487115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 
00:30:35.479 [2024-07-15 14:16:33.487468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.487478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.487802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.487814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.488095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.488105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.488450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.488461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.488811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.488823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.489168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.489178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.489498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.489510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.489692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.479 [2024-07-15 14:16:33.489703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.479 qpair failed and we were unable to recover it. 00:30:35.479 [2024-07-15 14:16:33.489896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.489907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.490119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.490130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 
00:30:35.480 [2024-07-15 14:16:33.490323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.490334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.490657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.490669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.490873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.490884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.491233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.491245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.491558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.491569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.491894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.491905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.492247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.492258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.492580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.492591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.492913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.492926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.493269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.493280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 
00:30:35.480 [2024-07-15 14:16:33.493628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.493640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.493974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.493984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.494320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.494331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.494512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.494522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.494764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.494774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.495091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.495101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.495423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.495434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.495759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.495771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.496112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.496123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 00:30:35.480 [2024-07-15 14:16:33.496350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.480 [2024-07-15 14:16:33.496360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.480 qpair failed and we were unable to recover it. 
00:30:35.480 [2024-07-15 14:16:33.496650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.480 [2024-07-15 14:16:33.496661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.480 qpair failed and we were unable to recover it.
00:30:35.485 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 pair repeats continuously from 14:16:33.497003 through 14:16:33.563623, every retry ending with "qpair failed and we were unable to recover it." ...]
00:30:35.485 [2024-07-15 14:16:33.563957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.485 [2024-07-15 14:16:33.563968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.485 qpair failed and we were unable to recover it. 00:30:35.485 [2024-07-15 14:16:33.564165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.485 [2024-07-15 14:16:33.564176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.485 qpair failed and we were unable to recover it. 00:30:35.485 [2024-07-15 14:16:33.564515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.485 [2024-07-15 14:16:33.564525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.485 qpair failed and we were unable to recover it. 00:30:35.485 [2024-07-15 14:16:33.564871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.485 [2024-07-15 14:16:33.564882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.485 qpair failed and we were unable to recover it. 00:30:35.485 [2024-07-15 14:16:33.565189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.485 [2024-07-15 14:16:33.565199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.485 qpair failed and we were unable to recover it. 00:30:35.485 [2024-07-15 14:16:33.565563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.485 [2024-07-15 14:16:33.565574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.485 qpair failed and we were unable to recover it. 00:30:35.485 [2024-07-15 14:16:33.565887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.565899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.566243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.566254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.566604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.566615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.566936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.566948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 
00:30:35.486 [2024-07-15 14:16:33.567291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.567301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.567638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.567649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.567848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.567859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.568141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.568151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.568473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.568483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.568821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.568832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.569184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.569195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.569513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.569525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.569870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.569881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.570089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.570099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 
00:30:35.486 [2024-07-15 14:16:33.570391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.570402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.570726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.570738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.571060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.571071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.571377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.571392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.571744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.571759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.572067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.572077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.572396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.572407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.572592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.572604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.572871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.572881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 00:30:35.486 [2024-07-15 14:16:33.573164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.486 [2024-07-15 14:16:33.573174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.486 qpair failed and we were unable to recover it. 
00:30:35.770 [2024-07-15 14:16:33.573497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.573508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.573831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.573843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.574218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.574229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.574552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.574562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.574938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.574949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.575287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.575298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.575640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.575650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.575879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.575889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.576209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.576219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.576558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.576570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 
00:30:35.770 [2024-07-15 14:16:33.576917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.576928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.577273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.577284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.577607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.577618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.577934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.577944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.578258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.578269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.578593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.578603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.579004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.579015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.579350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.579362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.579704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.579715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.770 qpair failed and we were unable to recover it. 00:30:35.770 [2024-07-15 14:16:33.580027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.770 [2024-07-15 14:16:33.580039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 
00:30:35.771 [2024-07-15 14:16:33.580358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.580369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.580710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.580721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.581008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.581018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.581338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.581348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.581677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.581688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.581909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.581920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.582238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.582249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.582409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.582419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.582743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.582760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.583061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.583071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 
00:30:35.771 [2024-07-15 14:16:33.583416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.583426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.583747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.583761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.584104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.584115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.584452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.584463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.584814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.584825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.585163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.585173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.585389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.585399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.585704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.585714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.586054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.586065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.586402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.586413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 
00:30:35.771 [2024-07-15 14:16:33.586758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.586770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.587107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.587118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.587348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.587358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.587625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.587635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.587996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.588008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.588300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.588311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.588652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.588663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.588981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.588993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.589332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.589342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.589684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.589694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 
00:30:35.771 [2024-07-15 14:16:33.590009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.590020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.590344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.590354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.590702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.590714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.591048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.591058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.591405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.591416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.591743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.591758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.592096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.592107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.592443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.592453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.592804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.592816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.771 [2024-07-15 14:16:33.593037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.593048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 
00:30:35.771 [2024-07-15 14:16:33.593388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.771 [2024-07-15 14:16:33.593399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.771 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.593736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.593749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.594089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.594100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.594465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.594476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.594663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.594674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.594963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.594974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.595274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.595285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.595642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.595653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.595879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.595891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.596311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.596323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 
00:30:35.772 [2024-07-15 14:16:33.596663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.596675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.597010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.597021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.597334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.597345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.597682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.597693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.597886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.597896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.598188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.598198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.598516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.598527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.598840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.598851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.599201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.599212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.599532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.599544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 
00:30:35.772 [2024-07-15 14:16:33.599869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.599880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.600217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.600228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.600569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.600581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.600902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.600913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.601214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.601226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.601410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.601421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.601731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.601743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.602054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.602064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.602371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.602383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.602682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.602693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 
00:30:35.772 [2024-07-15 14:16:33.603043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.603055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.603377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.603388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.603756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.603767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.604089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.604100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.604451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.604463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.604784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.604797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.605138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.605149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.605516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.605526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.605878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.605890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.606232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.606244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 
00:30:35.772 [2024-07-15 14:16:33.606556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.606567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.606894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.772 [2024-07-15 14:16:33.606905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.772 qpair failed and we were unable to recover it. 00:30:35.772 [2024-07-15 14:16:33.607240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.607253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.607576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.607586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.607795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.607806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.608146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.608157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.608293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.608303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.608634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.608645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.608978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.608989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.609350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.609362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 
00:30:35.773 [2024-07-15 14:16:33.609714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.609725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.610075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.610086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.610414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.610425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.610769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.610780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.611131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.611142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.611347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.611357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.611649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.611659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.611981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.611992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.612321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.612332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.612662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.612673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 
00:30:35.773 [2024-07-15 14:16:33.613012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.613024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.613356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.613366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.613722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.613732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.614056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.614068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.614389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.614399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.614617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.614627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.615002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.615013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.615337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.615347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.615636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.615646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.615984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.615996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 
00:30:35.773 [2024-07-15 14:16:33.616191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.616201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.616524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.616535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.616876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.616887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.617207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.617219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.617519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.617530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.617855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.617866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.618201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.618211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.618552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.618563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.618910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.618921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.619265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.619275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 
00:30:35.773 [2024-07-15 14:16:33.619538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.619549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.619842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.773 [2024-07-15 14:16:33.619853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.773 qpair failed and we were unable to recover it. 00:30:35.773 [2024-07-15 14:16:33.620206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.620216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.620410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.620420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.620762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.620773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.620978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.620988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.621309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.621321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.621663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.621674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.621994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.622005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.622351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.622363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 
00:30:35.774 [2024-07-15 14:16:33.622676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.622687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.623014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.623026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.623245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.623257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.623587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.623599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.623941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.623953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.624276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.624286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.624609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.624620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.624943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.624954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.625351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.625362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.625624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.625635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 
00:30:35.774 [2024-07-15 14:16:33.625932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.625943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.626265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.626276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.626613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.626623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.626950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.626961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.627147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.627158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.627450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.627460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.627775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.627787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.628130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.628141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.628459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.628471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.628815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.628826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 
00:30:35.774 [2024-07-15 14:16:33.629139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.629152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.629476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.629486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.629825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.629835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.630059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.630070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.630409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.630420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.630608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.774 [2024-07-15 14:16:33.630620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.774 qpair failed and we were unable to recover it. 00:30:35.774 [2024-07-15 14:16:33.630933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.630944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.631257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.631269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.631622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.631633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.631936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.631947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 
00:30:35.775 [2024-07-15 14:16:33.632354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.632364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.632673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.632684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.633015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.633025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.633343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.633354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.633681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.633692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.634028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.634040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.634353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.634364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.634691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.634702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.635024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.635034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.635350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.635361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 
00:30:35.775 [2024-07-15 14:16:33.635556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.635565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.635891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.635901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.636272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.636283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.636493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.636504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.636590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.636601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.636788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.636799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.637124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.637135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.637532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.637547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.637889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.637901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.638232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.638243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 
00:30:35.775 [2024-07-15 14:16:33.638567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.638578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.638870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.638880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.639184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.639195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.639541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.639552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.639883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.639894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.640225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.640237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.640572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.640584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.640919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.640931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.641246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.641257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.641575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.641586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 
00:30:35.775 [2024-07-15 14:16:33.641915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.641927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.642251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.642263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.642588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.642599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.642930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.642940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.643255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.643267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.643554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.643565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.643919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.775 [2024-07-15 14:16:33.643929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.775 qpair failed and we were unable to recover it. 00:30:35.775 [2024-07-15 14:16:33.644207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.644217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.644545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.644555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.644864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.644875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 
00:30:35.776 [2024-07-15 14:16:33.645063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.645074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.645380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.645392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.645573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.645584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.645900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.645911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.646238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.646248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.646479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.646490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.646829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.646840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.647196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.647207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.647397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.647407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.647739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.647750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 
00:30:35.776 [2024-07-15 14:16:33.648064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.648075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.648469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.648480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.648808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.648819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.649022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.649033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.649359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.649370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.649717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.649727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.650040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.650051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.650240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.650251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.650577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.650591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.650933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.650944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 
00:30:35.776 [2024-07-15 14:16:33.651239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.651249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.651554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.651564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.651902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.651914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.652145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.652155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.652367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.652377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.652555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.652566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.652912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.652923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.653127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.653137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.653420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.653431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.653766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.653778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 
00:30:35.776 [2024-07-15 14:16:33.653963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.653974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.654307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.654318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.654510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.654520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.654879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.654890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.655101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.655111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.655456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.655467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.655787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.655806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 00:30:35.776 [2024-07-15 14:16:33.656005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.776 [2024-07-15 14:16:33.656016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.776 qpair failed and we were unable to recover it. 
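The repeated pair of records above is the SPDK POSIX sock layer failing connect() with errno = 111 and the NVMe/TCP transport then giving up on the queue pair: on Linux, errno 111 is ECONNREFUSED, i.e. nothing is listening at 10.0.0.2:4420 while the host keeps retrying tqpair=0xad9a50. A minimal sketch of the same failure path follows, assuming a Linux host; the loopback address and port below are illustrative stand-ins, not values taken from the test rig:

/* Sketch: reproduce the "connect() failed, errno = 111" seen above.
 * Assumes Linux (ECONNREFUSED == 111) and that nothing listens on the
 * chosen port; 127.0.0.1:4420 is illustrative only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    /* With no NVMe-oF target listening on the port, connect() fails and
     * errno is ECONNREFUSED, which is 111 on Linux. */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with cc and run with no listener on the port, this prints "connect() failed, errno = 111 (Connection refused)", matching the records above.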
00:30:35.776 Read completed with error (sct=0, sc=8) 00:30:35.776 starting I/O failed 00:30:35.776 Read completed with error (sct=0, sc=8) 00:30:35.776 starting I/O failed 00:30:35.776 Write completed with error (sct=0, sc=8) 00:30:35.776 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Read completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 Write completed with error (sct=0, sc=8) 00:30:35.777 starting I/O failed 00:30:35.777 [2024-07-15 14:16:33.656730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.777 [2024-07-15 14:16:33.657065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.777 [2024-07-15 14:16:33.657108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:35.777 qpair failed and we were unable to recover it. 
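The 32 "completed with error (sct=0, sc=8) / starting I/O failed" pairs above are the outstanding reads and writes being failed back when the queue pair is torn down, and the "CQ transport error -6" is -ENXIO, consistent with the "No such device or address" text in the same record. Read against the NVMe base specification, sct=0 selects the Generic Command Status type, within which sc=0x8 is "Command Aborted due to SQ Deletion". A hedged decode of that status pair follows; the helper is illustrative only, using values recalled from the spec rather than any SPDK header:

/* Sketch: decode the (sct, sc) pair printed above using Generic Command
 * Status values from the NVMe base specification. This helper is
 * illustrative and is not part of SPDK. */
#include <stdio.h>

static const char *decode_status(int sct, int sc)
{
    if (sct != 0) {
        return "non-generic status code type";
    }
    switch (sc) {
    case 0x0: return "Successful Completion";
    case 0x4: return "Data Transfer Error";
    case 0x6: return "Internal Error";
    case 0x7: return "Command Abort Requested";
    case 0x8: return "Command Aborted due to SQ Deletion";
    default:  return "other generic command status";
    }
}

int main(void)
{
    /* Every completion in the block above carries sct=0, sc=8. */
    printf("(sct=0, sc=8) -> %s\n", decode_status(0, 0x8));
    return 0;
}

Note that the subsequent connect() retries report a new tqpair pointer (0x7f9894000b90), i.e. the host has discarded the dead qpair and is attempting to establish a fresh one against the same unreachable target.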
00:30:35.777 [2024-07-15 14:16:33.657505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.777 [2024-07-15 14:16:33.657536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420
00:30:35.777 qpair failed and we were unable to recover it.
00:30:35.777 [2024-07-15 14:16:33.657893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.777 [2024-07-15 14:16:33.657904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.777 qpair failed and we were unable to recover it.
[... the same three-record failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 14:16:33.658 through 14:16:33.723 ...]
00:30:35.782 [2024-07-15 14:16:33.723409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.723419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.723768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.723779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.724189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.724200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.724465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.724475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.724787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.724798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.725179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.725189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.725509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.725519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.725842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.725855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.726202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.726212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.726563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.726573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 
00:30:35.782 [2024-07-15 14:16:33.726889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.726900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.727228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.727238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.727579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.727590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.727929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.727939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.728000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.728010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.728383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.728393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.728714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.728724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.728955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.728966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.729263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.729273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.729593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.729604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 
00:30:35.782 [2024-07-15 14:16:33.729928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.729938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.782 [2024-07-15 14:16:33.730283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.782 [2024-07-15 14:16:33.730294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.782 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.730649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.730660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.731037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.731047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.731369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.731381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.731721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.731731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.732085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.732096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.732424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.732436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.732836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.732848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.733195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.733205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 
00:30:35.783 [2024-07-15 14:16:33.733526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.733537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.733723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.733734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.734068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.734081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.734413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.734424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.734769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.734783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.734972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.734982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.735296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.735306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.735514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.735524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.735807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.735818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.736141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.736151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 
00:30:35.783 [2024-07-15 14:16:33.736510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.736520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.736743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.736757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.737072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.737082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.737411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.737422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.737747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.737767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.738116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.738128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.738475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.738486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.738814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.738825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.739146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.739158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.739494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.739504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 
00:30:35.783 [2024-07-15 14:16:33.739825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.739836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.740134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.740145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.740475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.740487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.740553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.740564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.740888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.740899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.741248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.741258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.741582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.741593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.741927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.783 [2024-07-15 14:16:33.741938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.783 qpair failed and we were unable to recover it. 00:30:35.783 [2024-07-15 14:16:33.742284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.742296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.742620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.742630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 
00:30:35.784 [2024-07-15 14:16:33.742975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.742986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.743360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.743371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.743722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.743733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.743924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.743935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.744265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.744276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.744623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.744634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.744975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.744986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.745307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.745326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.745660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.745671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.746010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.746021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 
00:30:35.784 [2024-07-15 14:16:33.746367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.746378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.746697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.746708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.747024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.747035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.747370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.747381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.747726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.747737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.748055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.748068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.748387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.748398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.748741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.748755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.749071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.749081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.749271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.749282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 
00:30:35.784 [2024-07-15 14:16:33.749621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.749632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.749854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.749866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.750040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.750050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.750254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.750264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.750553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.750564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.750905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.750917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.751261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.751272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.751592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.751603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.751927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.751938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.752279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.752289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 
00:30:35.784 [2024-07-15 14:16:33.752601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.752613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.752992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.753002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.753323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.753334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.753676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.753687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.754010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.754021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.754384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.754395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.754716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.754727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.755063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.755073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.755386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.755397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.784 qpair failed and we were unable to recover it. 00:30:35.784 [2024-07-15 14:16:33.755719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.784 [2024-07-15 14:16:33.755730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 
00:30:35.785 [2024-07-15 14:16:33.756040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.756052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.756392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.756403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.756745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.756762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.756997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.757007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.757367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.757378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.757756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.757767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.758069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.758081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.758414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.758424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.758748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.758763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.759107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.759119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 
00:30:35.785 [2024-07-15 14:16:33.759468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.759479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.759803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.759815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.760175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.760185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.760532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.760543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.760873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.760885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.761222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.761232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.761581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.761591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.761786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.761797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.762141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.762152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.762471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.762482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 
00:30:35.785 [2024-07-15 14:16:33.762794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.762805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.763010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.763021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.763327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.763337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.763657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.763668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.763884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.763896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.764247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.764257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.764562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.764572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.764900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.764911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.765231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.765243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 00:30:35.785 [2024-07-15 14:16:33.765586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.785 [2024-07-15 14:16:33.765596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.785 qpair failed and we were unable to recover it. 
00:30:35.785 [2024-07-15 14:16:33.765938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.785 [2024-07-15 14:16:33.765950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.785 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously with advancing timestamps from 14:16:33.766 through 14:16:33.833 ...]
00:30:35.790 [2024-07-15 14:16:33.833697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.790 [2024-07-15 14:16:33.833706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:35.790 qpair failed and we were unable to recover it.
00:30:35.790 [2024-07-15 14:16:33.834050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.790 [2024-07-15 14:16:33.834061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.834398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.834409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.834757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.834769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.835088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.835099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.835421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.835432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.835768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.835779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.836119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.836129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.836415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.836428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.836747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.836764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.836959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.836969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 
00:30:35.791 [2024-07-15 14:16:33.837176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.837188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.837473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.837484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.837763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.837774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.838054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.838064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.838412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.838422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.838724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.838734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.839062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.839073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.839418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.839429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.839811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.839822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.840163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.840174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 
00:30:35.791 [2024-07-15 14:16:33.840496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.840507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.840878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.840890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.841197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.841209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.841528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.841539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.841880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.841890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.842222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.842232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.842577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.842588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.842881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.842892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.843228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.843240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.843586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.843596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 
00:30:35.791 [2024-07-15 14:16:33.843825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.843836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.844099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.844109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.844427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.844438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.844774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.844784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.844980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.844991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.845333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.845344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.845665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.845677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.791 [2024-07-15 14:16:33.845985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.791 [2024-07-15 14:16:33.845996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.791 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.846348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.846358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.846697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.846707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 
00:30:35.792 [2024-07-15 14:16:33.847030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.847042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.847277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.847288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.847619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.847631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.847963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.847975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.848168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.848179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.848468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.848479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.848823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.848834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.849193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.849204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.849531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.849544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.849885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.849897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 
00:30:35.792 [2024-07-15 14:16:33.850330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.850341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.850662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.850673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.851007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.851019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.851353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.851364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.851707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.851719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.852076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.852088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.852388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.852399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.852461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.852471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.852784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.852796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.853110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.853121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 
00:30:35.792 [2024-07-15 14:16:33.853486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.853498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.853832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.853844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.854145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.854156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.854476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.854487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.854809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.854820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.855151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.855163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.855475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.855486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.855811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.855823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.856162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.856174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.856508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.856520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 
00:30:35.792 [2024-07-15 14:16:33.856868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.856879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.857211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.857222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.857543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.857555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.857761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.857773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.858078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.858089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.858493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.858506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.858823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.858835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.792 qpair failed and we were unable to recover it. 00:30:35.792 [2024-07-15 14:16:33.859154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.792 [2024-07-15 14:16:33.859166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.793 qpair failed and we were unable to recover it. 00:30:35.793 [2024-07-15 14:16:33.859520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.793 [2024-07-15 14:16:33.859531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.793 qpair failed and we were unable to recover it. 00:30:35.793 [2024-07-15 14:16:33.859861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.793 [2024-07-15 14:16:33.859874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.793 qpair failed and we were unable to recover it. 
00:30:35.793 [2024-07-15 14:16:33.860205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.793 [2024-07-15 14:16:33.860216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.793 qpair failed and we were unable to recover it. 00:30:35.793 [2024-07-15 14:16:33.860559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.793 [2024-07-15 14:16:33.860571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.793 qpair failed and we were unable to recover it. 00:30:35.793 [2024-07-15 14:16:33.860935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.793 [2024-07-15 14:16:33.860946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.793 qpair failed and we were unable to recover it. 00:30:35.793 [2024-07-15 14:16:33.861287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.793 [2024-07-15 14:16:33.861298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:35.793 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.861612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.861625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.861806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.861818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.862120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.862132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.862354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.862366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.862590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.862602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.862894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.862906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 
00:30:36.065 [2024-07-15 14:16:33.863222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.863233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.863555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.863566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.863965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.863977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.864311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.864322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.864662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.864673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.865004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.865017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.865283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.865295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.865714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.865725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.866049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.866060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.866398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.866409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 
00:30:36.065 [2024-07-15 14:16:33.866747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.866766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.867019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.867031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.867157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.867169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.867492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.867504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.867823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.867835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.868157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.868169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.868520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.868532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.868691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.868703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.869024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.869036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.869382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.869393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 
00:30:36.065 [2024-07-15 14:16:33.869731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.869742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.870114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.870125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.870448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.870460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.870675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.870687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.870997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.871008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.871340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.871351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.871750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.871768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.872051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.872061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.065 [2024-07-15 14:16:33.872409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.065 [2024-07-15 14:16:33.872420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.065 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.872748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.872765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 
00:30:36.066 [2024-07-15 14:16:33.873144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.873154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.873338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.873348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.873654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.873665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.873857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.873867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.874162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.874172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.874504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.874514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.874737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.874747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.875061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.875071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.875410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.875420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-07-15 14:16:33.875758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.875769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 
00:30:36.066 [2024-07-15 14:16:33.875840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-07-15 14:16:33.875848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it.
00:30:36.066 [... the same pair of errors — posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 — repeats continuously from 2024-07-15 14:16:33.876168 through 14:16:33.940390 (console timestamps 00:30:36.066-00:30:36.070); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:36.070 [2024-07-15 14:16:33.940763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.940774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.941092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.941102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.941316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.941327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.941644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.941655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.942006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.942017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.942362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.942373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.942668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.942679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.942974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.942986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.943322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.943333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.943674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.943685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-07-15 14:16:33.944052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.944063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.944383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.944394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.944735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.944746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.945100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.945111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.945407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.945419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.945745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.945761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.946106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.946116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.946459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.946470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.946790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.946802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.947085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.947095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-07-15 14:16:33.947436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.947446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.947749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.947767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.948084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.948095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.948310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.948320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.948615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.948625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.948961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.948973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.949294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.949306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.949634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.949645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.949833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.949844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.950147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.950158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-07-15 14:16:33.950385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.950395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.950451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.950462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.950748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.950765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.951080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.951090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.951426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.951437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.951759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.951771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.952102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.952113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.952453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.952463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.952815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.952826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.953192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.953203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-07-15 14:16:33.953432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.953442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.953777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.953788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.954112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.954123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.954447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.954458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.954780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.954791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.955129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.955139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.955477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.955488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.955817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.955828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.956208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.956221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-07-15 14:16:33.956562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.956573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-07-15 14:16:33.956762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-07-15 14:16:33.956773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.957093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.957104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.957435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.957446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.957669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.957679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.957926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.957938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.958211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.958222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.958548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.958559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.958742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.958763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.959072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.959082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.959405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.959416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-07-15 14:16:33.959737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.959747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.960104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.960114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.960462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.960473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.960792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.960803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.961127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.961137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.961474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.961485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.961827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.961839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.962160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.962170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.962455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.962467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.962771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.962782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-07-15 14:16:33.963112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.963122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.963445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.963455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.963800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.963811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.964203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.964213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.964498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.964510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.964829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.964840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.965063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.965073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.965408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.965418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.965763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.965774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.966081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.966093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-07-15 14:16:33.966280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.966291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.966486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.966497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.966844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.966855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.967189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.967199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.967520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.967531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.967867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.967879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.968182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.968193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.968516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.968527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.968841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.968853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.969170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.969182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-07-15 14:16:33.969520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.969530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.969876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.969888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.970063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.970073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.970375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.970387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.970736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.970746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.970937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.970948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.971247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.971258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.971599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.971609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.971949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.971960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-07-15 14:16:33.972281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.972291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-07-15 14:16:33.972613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-07-15 14:16:33.972624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.972930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.972941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.973234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.973244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.973438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.973448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.973723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.973734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.974105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.974116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.974412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.974424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.974738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.974748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.975053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.975064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.975409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.975420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 
00:30:36.072 [2024-07-15 14:16:33.975775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.975786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.975975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.975985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.976290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.976300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.976638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.976649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.976984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.976995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.977180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.977190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.977530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.977543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.977881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.977893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.978233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.978244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.978435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.978446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 
00:30:36.072 [2024-07-15 14:16:33.978738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.978749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.979098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.979109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.979449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.979459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.979780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.979791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.980139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.980149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.980486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.980497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.980841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.980860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.981097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.981108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.981424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.981435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.981773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.981783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 
00:30:36.072 [2024-07-15 14:16:33.982018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.982029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.982365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.982377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.982564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.982575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.982769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.982780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.983081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.983092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.983411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.983423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.983747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.983762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.984066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.984078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.984419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.984429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-07-15 14:16:33.984749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-07-15 14:16:33.984765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-07-15 14:16:34.044305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.044316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.044643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.044654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.044991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.045002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.045339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.045350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.045547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.045561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.045877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.045889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.046255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.046266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.046644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.046655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.046990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.047002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.047320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.047331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-07-15 14:16:34.047543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.047554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.047892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.047903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.048097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.048107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.048402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.048412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.048731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.048743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.049092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.049102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.049448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.049459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.049769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.049781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.050086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.050096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.050390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.050401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-07-15 14:16:34.050761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.050772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.051087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.051099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.051386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.051396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.051740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.051756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.052087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.052098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.052418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.052429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.052743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.052758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.053096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.053107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.053496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.053506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.053818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.053831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-07-15 14:16:34.054187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.054198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.054543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.054556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.054881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.054891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.055221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.055232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.055561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.055572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.055906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.055918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.056061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.056072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.056398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.056409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.056715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.056725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.057064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.057076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-07-15 14:16:34.057426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.057437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.057813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.057824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.058189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.058199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.058537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.058547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-07-15 14:16:34.058874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-07-15 14:16:34.058885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.059254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.059265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.059573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.059584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.059927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.059939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.060278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.060289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.060610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.060621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-07-15 14:16:34.060933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.060944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.061261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.061272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.061498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.061509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.061643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.061653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.061964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.061976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.062292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.062302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.062638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.062649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.062975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.062986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.063181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.063192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.063535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.063547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-07-15 14:16:34.063879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.063890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.064238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.064249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.064570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.064581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.064907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.064918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.065261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.065272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.065580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.065590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.065800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.065810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.066141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.066151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.066488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.066498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.066849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.066860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-07-15 14:16:34.067207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.067217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.067539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.067550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.067741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.067761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.068054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.068065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.068383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.068394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.068736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.068747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.069054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.069066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.069299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.069309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.069632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.069643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.069834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.069845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-07-15 14:16:34.070186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.070197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.070530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.070541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.070727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.070737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.071031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.071042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.071373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.071383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.071730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.071741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.072065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.072075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.072404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.072414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.072757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.072768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.073129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.073140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-07-15 14:16:34.073454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.073464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.073679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.073689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.074018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.074029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.074326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.074336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.074697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.074708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.075026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.075037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.075375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.075386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.075735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.075746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.076066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.076078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-07-15 14:16:34.076403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-07-15 14:16:34.076417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.078 [2024-07-15 14:16:34.076757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.076769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.077109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.077120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.077450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.077462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.077798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.077809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.078127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.078138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.078323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.078334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.078656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.078667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.078996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.079007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.079345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.079356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.079732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.079743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 
00:30:36.078 [2024-07-15 14:16:34.080090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.080101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.080422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.080432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.080628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.080638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.080943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.080954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.081301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.081312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.081633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.081643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.081980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.081991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.082371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.082382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.082702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.082713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.083040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.083052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 
00:30:36.078 [2024-07-15 14:16:34.083389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.083400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.083695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.083707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.084013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.084024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.084362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.084374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.084710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.084721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.085066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.085078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.085404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.085416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.085740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.085756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.086096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.086108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.086404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.086416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 
00:30:36.078 [2024-07-15 14:16:34.086590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.086601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.086819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.086830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.087140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.087150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.087436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.087448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.087720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.087731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.088053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.088064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.088409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.088420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.088637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.088647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.088978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.088989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-07-15 14:16:34.089317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.089327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 
00:30:36.078 [2024-07-15 14:16:34.089670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-07-15 14:16:34.089682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it.
00:30:36.078 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 14:16:34.089670 through 14:16:34.155637; only the timestamps differ ...]
00:30:36.082 [2024-07-15 14:16:34.155626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.155637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it.
00:30:36.082 [2024-07-15 14:16:34.155979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.155989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.156330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.156341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.156688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.156698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.156956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.156967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.157300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.157309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.157502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.157513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.157840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.157851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.158151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.158161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.158350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.158361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.158662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.158673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 
00:30:36.082 [2024-07-15 14:16:34.158995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.159007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.159351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.159362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.159554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.159566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.159858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.159870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.160200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.160211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.160521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.160533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.160875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.160887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.161281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.161292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.161637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.161648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 00:30:36.082 [2024-07-15 14:16:34.161984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.161996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.082 qpair failed and we were unable to recover it. 
00:30:36.082 [2024-07-15 14:16:34.162309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.082 [2024-07-15 14:16:34.162320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.162512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.162523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.162859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.162870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.163210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.163221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.163419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.163430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.163648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.163659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.163956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.163967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.164165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.164177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.164513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.164525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.164723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.164733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 
00:30:36.083 [2024-07-15 14:16:34.165067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.165079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.165383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.165394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.165712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.165724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.166051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.166063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.166399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.166410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.083 [2024-07-15 14:16:34.166748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.083 [2024-07-15 14:16:34.166765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.083 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.167066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.167079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.167415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.167426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.167770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.167782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.168101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.168112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 
00:30:36.352 [2024-07-15 14:16:34.168454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.168465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.168779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.168790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.169034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.169046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.169377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.169388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.169738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.169749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.170160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.170177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.170528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.170539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.170760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.170771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.171111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.171121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.171311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.171324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 
00:30:36.352 [2024-07-15 14:16:34.171517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.171528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.171869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.171880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.172197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.172207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.172527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.172538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.172889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.172900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.173247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.173258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.173585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.173596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.173916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.173928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.174204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.174215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 00:30:36.352 [2024-07-15 14:16:34.174423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.352 [2024-07-15 14:16:34.174434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.352 qpair failed and we were unable to recover it. 
00:30:36.353 [2024-07-15 14:16:34.174799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.174810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.175171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.175182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.175517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.175530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.175759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.175770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.176064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.176075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.176398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.176408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.176748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.176763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.177000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.177011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.177278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.177289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.177450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.177461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 
00:30:36.353 [2024-07-15 14:16:34.177817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.177828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.178142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.178152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.178319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.178329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.178512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.178523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.178816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.178827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.179037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.179048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.179382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.179392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.179731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.179742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.180083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.180094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.180436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.180446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 
00:30:36.353 [2024-07-15 14:16:34.180758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.180770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.181098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.181109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.181412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.181422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.181761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.181772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.182075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.182086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.182405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.182416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.182758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.182769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.183146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.183156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.183380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.183391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.183710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.183721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 
00:30:36.353 [2024-07-15 14:16:34.184064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.184075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.184415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.184425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.184716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.184727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.184933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.184944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.185261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.185271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.353 qpair failed and we were unable to recover it. 00:30:36.353 [2024-07-15 14:16:34.185570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.353 [2024-07-15 14:16:34.185581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.185908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.185919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.186242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.186253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.186596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.186606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.186940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.186951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 
00:30:36.354 [2024-07-15 14:16:34.187259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.187269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.187423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.187433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.187771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.187781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.188124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.188136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.188473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.188483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.188828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.188840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.189186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.189196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.189541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.189551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.189863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.189874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.190193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.190204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 
00:30:36.354 [2024-07-15 14:16:34.190547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.190558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.190860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.190871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.191200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.191211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.191400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.191412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.191725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.191735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.192048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.192059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.192379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.192390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.192713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.192724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.192953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.192965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.193267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.193278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 
00:30:36.354 [2024-07-15 14:16:34.193604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.193615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.193933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.193944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.194288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.194299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.194640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.194650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.194997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.195008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.195166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.195178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.195510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.195520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.354 [2024-07-15 14:16:34.195861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.354 [2024-07-15 14:16:34.195872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.354 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.196174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.196185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.196525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.196536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 
00:30:36.355 [2024-07-15 14:16:34.196875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.196888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.197180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.197191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.197539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.197550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.197891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.197903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.198195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.198206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.198523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.198536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.198845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.198855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.199212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.199224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.199563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.199574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.199888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.199899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 
00:30:36.355 [2024-07-15 14:16:34.200099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.200110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.200407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.200418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.200760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.200771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.201102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.201113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.201308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.201319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.201617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.201628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.201977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.201989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.202183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.202195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.202376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.202387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.202682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.202693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 
00:30:36.355 [2024-07-15 14:16:34.202969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.202980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.203283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.203294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.203482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.203493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.203771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.203782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.204120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.204130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.204486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.204496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.204822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.204833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.205034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.205044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.205255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.205267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.205584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.205594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 
00:30:36.355 [2024-07-15 14:16:34.205918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.205929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.206141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.206151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.206334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.206344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.206653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.206663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.206948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.355 [2024-07-15 14:16:34.206959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.355 qpair failed and we were unable to recover it. 00:30:36.355 [2024-07-15 14:16:34.207242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.207252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.207554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.207564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.207758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.207770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.208107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.208117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.208457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.208468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 
00:30:36.356 [2024-07-15 14:16:34.208794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.208805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.208977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.208990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.209305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.209315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.209657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.209667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.209986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.209998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.210185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.210196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.210411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.210421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.210755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.210766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.211071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.211081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.211434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.211445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 
00:30:36.356 [2024-07-15 14:16:34.211769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.211779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.212119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.212129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.212463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.212474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.212816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.212828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.213176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.213186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.213507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.213518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.213856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.213867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.214217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.214227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.214541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.214552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.214910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.214921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 
00:30:36.356 [2024-07-15 14:16:34.215104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.215115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.215433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.215444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.215762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.215773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.216107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.216117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.216454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.216465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.216811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.216821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.217107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.217117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.217434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.217444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.217788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.217799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.218134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.218144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 
00:30:36.356 [2024-07-15 14:16:34.218470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.218481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.218675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.218685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.218964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.218976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.219276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.219287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.219610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.219621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.356 qpair failed and we were unable to recover it. 00:30:36.356 [2024-07-15 14:16:34.219929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.356 [2024-07-15 14:16:34.219939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.220285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.220295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.220640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.220650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.220991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.221001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.221325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.221336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 
00:30:36.357 [2024-07-15 14:16:34.221672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.221682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.222074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.222086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.222406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.222417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.222736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.222747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.223076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.223086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.223466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.223478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.223788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.223798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.224010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.224021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.224361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.224371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.224718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.224728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 
00:30:36.357 [2024-07-15 14:16:34.225096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.225107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.225433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.225443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.225789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.225800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.226125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.226136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.226301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.226312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.226620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.226631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.226923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.226934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.227234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.227245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.227547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.227558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.227872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.227882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 
00:30:36.357 [2024-07-15 14:16:34.228231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.228242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.228436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.228446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.228771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.228782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.229068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.229078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.229382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.229393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.229765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.229777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.230094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.230104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.230422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.230432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.230776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.230787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.357 qpair failed and we were unable to recover it. 00:30:36.357 [2024-07-15 14:16:34.231189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.357 [2024-07-15 14:16:34.231201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 
00:30:36.358 [2024-07-15 14:16:34.231462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.231472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.231801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.231812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.232150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.232160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.232503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.232514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.232834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.232845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.233134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.233143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.233491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.233501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.233837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.233848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.234188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.234198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.234519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.234529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 
00:30:36.358 [2024-07-15 14:16:34.234867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.234878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.235226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.235237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.235557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.235568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.235828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.235840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.236176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.236187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.236531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.236542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.236910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.236920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.237243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.237254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.237552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.237564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.237895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.237905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 
00:30:36.358 [2024-07-15 14:16:34.238229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.238240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.238564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.238574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.238948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.238959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.239266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.239276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.239601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.239613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.239827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.239838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.240175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.240185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.240525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.240536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.240856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.240867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.241198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.241209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 
00:30:36.358 [2024-07-15 14:16:34.241550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.241561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.241763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.241773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.242102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.242112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.242461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.242471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.242798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.242809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.243124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.243135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.243458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.243468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.243791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.243801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.244130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.244141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.358 [2024-07-15 14:16:34.244485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.244496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 
00:30:36.358 [2024-07-15 14:16:34.244832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.358 [2024-07-15 14:16:34.244843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.358 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.245188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.245199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.245541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.245552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.245898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.245909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.246097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.246108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.246446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.246457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.246798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.246809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.247033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.247043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.247363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.247373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.247693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.247704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 
00:30:36.359 [2024-07-15 14:16:34.248043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.248054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.248339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.248349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.248671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.248681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.249003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.249015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.249350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.249361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.249657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.249667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.249840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.249851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.250155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.250165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.250485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.250496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.250841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.250852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 
00:30:36.359 [2024-07-15 14:16:34.251194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.251204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.251530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.251540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.251741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.251756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.252064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.252075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.252431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.252442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.252778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.252789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.253129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.253140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.253482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.253494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.253812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.253823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 00:30:36.359 [2024-07-15 14:16:34.254144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.359 [2024-07-15 14:16:34.254154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.359 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 190 further reconnect attempts between 14:16:34.254491 and 14:16:34.315584 ...]
00:30:36.364 [2024-07-15 14:16:34.315856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.315866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.316185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.316195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.316546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.316556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.316750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.316765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.317095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.317105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.317425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.317436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.317618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.317629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.317933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.317944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.318228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.318238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.318463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.318473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 
00:30:36.364 [2024-07-15 14:16:34.318738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.318748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.319080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.319091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.319433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.319444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.319740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.319754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.320055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.320066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.320415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.320426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.320748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.320763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.320954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.320966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.321254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.321265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.321606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.321617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 
00:30:36.364 [2024-07-15 14:16:34.321921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.364 [2024-07-15 14:16:34.321931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.364 qpair failed and we were unable to recover it. 00:30:36.364 [2024-07-15 14:16:34.322250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.322260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.322599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.322609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.322947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.322958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.323294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.323304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.323616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.323626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.323932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.323943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.324293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.324304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.324614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.324624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.324978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.324989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 
00:30:36.365 [2024-07-15 14:16:34.325375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.325385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.325692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.325703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.326013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.326024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.326307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.326318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.326656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.326667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.326987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.326998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.327316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.327326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.327651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.327661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.328056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.328068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.328395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.328406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 
00:30:36.365 [2024-07-15 14:16:34.328603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.328613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.328971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.328981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.329320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.329330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.329676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.329687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.330013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.330024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.330346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.330357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.330695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.330706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.331021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.331032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.331356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.331366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.331687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.331697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 
00:30:36.365 [2024-07-15 14:16:34.332034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.332045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.365 qpair failed and we were unable to recover it. 00:30:36.365 [2024-07-15 14:16:34.332351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.365 [2024-07-15 14:16:34.332362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.332658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.332670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.332985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.332996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.333258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.333270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.333617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.333627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.333968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.333980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.334301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.334311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.334647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.334657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.334850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.334861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-07-15 14:16:34.335146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.335156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.335504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.335515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.335882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.335892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.336204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.336214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.336534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.336545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.336741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.336754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.337107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.337117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.337458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.337469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.337818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.337830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.338170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.338180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-07-15 14:16:34.338544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.338554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.338892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.338903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.339220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.339230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.339553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.339563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.339765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.339775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.339958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.339968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.340309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.340320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.340665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.340675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.341019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.341030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.341370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.341380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-07-15 14:16:34.341726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.341737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.342061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.342072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.342393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.342404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.342781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.342791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.343095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.343106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.343434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.343445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.343768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-07-15 14:16:34.343779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-07-15 14:16:34.344127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.344138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.344481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.344492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.344814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.344824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-07-15 14:16:34.345162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.345173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.345515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.345525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.345863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.345873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.346222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.346232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.346555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.346565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.346912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.346924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.347255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.347266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.347589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.347600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.347907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.347917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.348263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.348274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-07-15 14:16:34.348617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.348628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.348961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.348971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.349290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.349301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.349640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.349650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.349981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.349992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.350317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.350328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.350725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.350736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.350921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.350934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.351320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.351331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.351654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.351665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-07-15 14:16:34.352004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.352015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.352235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.352246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.352449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.352459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.352761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.352773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.353018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.353028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.353239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.353250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.353556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.353567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.353880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.353891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.354260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.354271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.354565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.354576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-07-15 14:16:34.354875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.354889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.355239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.355249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.355439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.355449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.355745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-07-15 14:16:34.355762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-07-15 14:16:34.356076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.356087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.356409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.356420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.356759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.356770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.356982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.356993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.357336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.357347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.357641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.357652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 
00:30:36.368 [2024-07-15 14:16:34.358035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.358045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.358231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.358243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.358556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.358567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.358958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.358969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.359292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.359303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.359641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.359652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.359786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.359796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.360104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.360114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.360436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.360446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.360788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.360798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 
00:30:36.368 [2024-07-15 14:16:34.361120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.361131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.361451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.361461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.361768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.361780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.362132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.362142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.362500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.362510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.362830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.362840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.363169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.363179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.363392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.363402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.363736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.363746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.363948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.363959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 
00:30:36.368 [2024-07-15 14:16:34.364274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.364284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.364625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.364636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.364951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.364962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.365279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.365290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.365493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.365503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.365809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.365820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.365996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.366006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.366313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.366323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.366657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.366668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.367013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.367025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 
00:30:36.368 [2024-07-15 14:16:34.367375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-07-15 14:16:34.367386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-07-15 14:16:34.367707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.367719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.367823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.367833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.368362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.368451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.369011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.369099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.369558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.369592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.369870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.369914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.370278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.370290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.370597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.370608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.370937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.370948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 
00:30:36.369 [2024-07-15 14:16:34.371271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.371281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.371433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.371444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.371710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.371720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.371965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.371976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.372294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.372305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.372648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.372659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.372988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.372999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.373342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.373353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.373678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.373688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.374018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.374029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 
00:30:36.369 [2024-07-15 14:16:34.374367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.374378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.374721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.374731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.375112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.375124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.375445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.375455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.375791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.375801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.376181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.376192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.376516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.376526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.376881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.376891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.377197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.377210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.377560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.377570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 
00:30:36.369 [2024-07-15 14:16:34.377890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.377902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.378244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.378254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.378626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.378637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.378863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.378873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.369 [2024-07-15 14:16:34.379203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.369 [2024-07-15 14:16:34.379215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.369 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.379560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.379571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.379913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.379923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.380218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.380229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.380546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.380556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.380902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.380913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 
00:30:36.370 [2024-07-15 14:16:34.381261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.381273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.381504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.381515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.381858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.381869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.382210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.382221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.382563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.382574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.382773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.382783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.383090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.383100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.383419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.383429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.383772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.383783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.384177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.384189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 
00:30:36.370 [2024-07-15 14:16:34.384372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.384382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.384708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.384718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.385058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.385069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.385421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.385432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.385707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.385718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.386041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.386052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.386379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.386389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.386740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.386754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.387053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.387064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.387396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.387407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 
00:30:36.370 [2024-07-15 14:16:34.387783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.387794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.387998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.388008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.388190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.388201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.388434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.388444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.388761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.388771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.389076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.389086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.389408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.389419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.389609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.370 [2024-07-15 14:16:34.389621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.370 qpair failed and we were unable to recover it. 00:30:36.370 [2024-07-15 14:16:34.389910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.389921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.390236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.390247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-07-15 14:16:34.390566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.390577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.390897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.390908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.391248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.391258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.391611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.391621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.391894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.391905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.392288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.392298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.392637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.392648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.392977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.392988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.393312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.393322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.393646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.393658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-07-15 14:16:34.393997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.394008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.394356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.394366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.394693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.394705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.395052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.395064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.395273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.395283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.395588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.395599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.395954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.395965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.396104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.396113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.396393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.396404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.396723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.396733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-07-15 14:16:34.397060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.397071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.397390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.397400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.397737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.397747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.397942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.397954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.398288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.398299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.398589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.398599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.398933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.398945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.399258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.399268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.399590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.399600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.399922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.399933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-07-15 14:16:34.400239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.400250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.400453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.400464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.400774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.400785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.401082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.401092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-07-15 14:16:34.401433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-07-15 14:16:34.401444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.401795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.401806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.402108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.402119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.402444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.402454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.402668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.402678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.403024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.403036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-07-15 14:16:34.403359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.403369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.403704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.403714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.404059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.404070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.404317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.404328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.404636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.404647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.404989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.404999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.405348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.405358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.405697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.405708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.406019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.406030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.406334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.406344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-07-15 14:16:34.406532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.406543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.406735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.406746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.407120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.407130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.407447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.407457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.407768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.407780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.408110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.408120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.408313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.408325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.408623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.408633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.409012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.409023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.409371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.409382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-07-15 14:16:34.409664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.409674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.409978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.409989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.410294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.410305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.410649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.410660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.411000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.411011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.411329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.411340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.411526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.411538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.411904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.411917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.412302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.412313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-07-15 14:16:34.412701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-07-15 14:16:34.412712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-07-15 14:16:34.413052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.413062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.413405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.413416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.413729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.413739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.414058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.414069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.414372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.414382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.414689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.414701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.414919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.414930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.415227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.415238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.415542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.415553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.415874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.415885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 
00:30:36.373 [2024-07-15 14:16:34.416216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.416226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.416542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.416553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.416893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.416903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.417287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.417297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.417607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.417618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.417907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.417918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.418253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.418264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.418608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.418620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.418964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.418974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.419289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.419299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 
00:30:36.373 [2024-07-15 14:16:34.419639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.419650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.419978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.419988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.420318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.420328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.420653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.420663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.420965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.420978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.421361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.421372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.421687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.421697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.422017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.422027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.422313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.422324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-07-15 14:16:34.422668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-07-15 14:16:34.422679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 
00:30:36.373 [2024-07-15 14:16:34.423010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.423021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.423341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.423352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.423694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.423705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.424107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.424118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.424426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.424436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.424745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.424763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.424969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.424979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.425259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.425269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.425588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.425599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.425910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.425921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.373 qpair failed and we were unable to recover it.
00:30:36.373 [2024-07-15 14:16:34.426262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.373 [2024-07-15 14:16:34.426272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.426626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.426636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.426965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.426976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.427267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.427277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.427581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.427592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.427932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.427942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.428281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.428292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.428610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.428620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.428963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.428974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.429313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.429324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.429650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.429661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.429994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.430004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.430343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.430354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.430667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.430678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.431027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.431037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.431358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.431368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.431677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.431688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.432016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.432027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.432348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.432358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.432668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.432678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.432991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.433002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.433296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.433307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.433627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.433638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.433824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.433835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.434155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.434166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.434508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.434520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.434833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.434843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.435047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.435058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.435382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.435392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.435733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.435744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.436071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.436082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.436401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.436412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.436725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.436736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.437055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.437065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.437288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.437298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.437622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.437633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.437976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.437987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.438334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.438345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.438661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.438671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.438950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.438960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.439284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.439294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.374 [2024-07-15 14:16:34.439648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.374 [2024-07-15 14:16:34.439659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.374 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.439855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.439866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.440196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.440207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.440556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.440567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.440919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.440930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.441266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.441276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.441571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.441581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.441899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.441910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.442224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.442235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.442557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.442567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.442886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.442897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.443109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.443121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.443365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.443375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.443677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.443688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.443881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.443892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.444211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.444222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.444573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.444583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.444898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.444909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.445224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.445235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.445524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.445534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.445880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.445891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.446212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.446222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.446544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.446555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.446892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.446902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.447244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.447255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.447552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.447564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.447886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.447897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.448214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.448225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.448576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.448587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.448909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.448920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.449230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.449241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.449563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.449574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.449922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.449932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.450269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.450280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.450616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.375 [2024-07-15 14:16:34.450626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.375 qpair failed and we were unable to recover it.
00:30:36.375 [2024-07-15 14:16:34.450934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.450945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.451238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.451249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.451577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.451588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.451987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.451999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.452306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.452316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.452511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.452522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.452853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.452863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.453189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.453199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.453507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.453518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.453823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.453834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.454173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.454183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.454508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.454518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.454920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.454933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.455223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.455234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.455571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.455582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.455907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.455918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.376 [2024-07-15 14:16:34.456234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.376 [2024-07-15 14:16:34.456245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.376 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.456586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.456600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.456940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.456951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.457272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.457282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.457593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.457604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.457949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.457961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.458277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.458287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.458641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.458652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.458977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.458989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.459331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.459342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.459658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.459669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.459945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.459955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.460255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.460265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.460608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.460619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.460902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.460913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.461240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.461250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.461559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.461570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.461800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.461812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.462166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.462176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.462478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.462488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.462851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.462861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.463181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.463192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.463514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.463525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.463889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.463900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.464240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.464250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.464538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.464549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.464871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.464882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.465212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.465224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.465539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.465552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.465673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.465684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.465967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.465978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.466295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.466306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.466621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.466631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.466965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.646 [2024-07-15 14:16:34.466976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.646 qpair failed and we were unable to recover it.
00:30:36.646 [2024-07-15 14:16:34.467285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.467296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.467615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.467625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.467938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.467949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.468285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.468296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.468652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.468662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.468998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.469009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.469329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.469340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.469649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.469660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.470000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.470011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.470335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.470346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.470673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.470683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.470917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.470927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.471245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.471256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.471568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.471579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.471817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.471828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.472141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.472151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.472472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.472482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.472806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.472817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.473149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.473160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.473501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.473512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.473830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.473842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.474183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.474193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.474512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.474524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.474711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.474722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.475028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.475039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.475359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.475370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.475694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.475704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.475996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.476007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.476326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.476337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.476661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.476671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.476995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.477007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.477348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.477358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.477631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.477642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.477986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.647 [2024-07-15 14:16:34.477997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.647 qpair failed and we were unable to recover it.
00:30:36.647 [2024-07-15 14:16:34.478318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.478328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.647 qpair failed and we were unable to recover it. 00:30:36.647 [2024-07-15 14:16:34.478643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.478656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.647 qpair failed and we were unable to recover it. 00:30:36.647 [2024-07-15 14:16:34.478840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.478852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.647 qpair failed and we were unable to recover it. 00:30:36.647 [2024-07-15 14:16:34.479185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.479196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.647 qpair failed and we were unable to recover it. 00:30:36.647 [2024-07-15 14:16:34.479515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.479525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.647 qpair failed and we were unable to recover it. 00:30:36.647 [2024-07-15 14:16:34.479853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.479864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.647 qpair failed and we were unable to recover it. 00:30:36.647 [2024-07-15 14:16:34.480225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.480235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.647 qpair failed and we were unable to recover it. 00:30:36.647 [2024-07-15 14:16:34.480546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.647 [2024-07-15 14:16:34.480556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.480877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.480887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.481238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.481249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 
00:30:36.648 [2024-07-15 14:16:34.481592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.481602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.481927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.481938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.482260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.482271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.482500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.482511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.482822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.482833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.483157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.483168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.483480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.483491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.483867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.483877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.484089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.484099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.484417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.484428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 
00:30:36.648 [2024-07-15 14:16:34.484764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.484775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.485099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.485110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.485434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.485445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.485766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.485778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.486007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.486018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.486349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.486359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.486694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.486704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.486913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.486924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.487248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.487258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.487597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.487607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 
00:30:36.648 [2024-07-15 14:16:34.487930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.487940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.488208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.488218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.488545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.488556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.488907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.488918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.489232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.489242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.489551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.489561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.489882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.489893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.490213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.490223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.490542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.490553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.490873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.490884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 
00:30:36.648 [2024-07-15 14:16:34.491203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.491214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.491556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.491567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.491909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.491920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.492261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.492272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.492614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.492624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.492943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.492954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.493324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.493334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.493666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.493676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.648 [2024-07-15 14:16:34.494058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.648 [2024-07-15 14:16:34.494068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.648 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.494362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.494373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 
00:30:36.649 [2024-07-15 14:16:34.494684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.494694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.494853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.494866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.495192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.495203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.495549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.495559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.495880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.495891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.496211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.496221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.496556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.496567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.496899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.496910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.497249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.497259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.497542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.497554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 
00:30:36.649 [2024-07-15 14:16:34.497873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.497884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.498230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.498241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.498556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.498566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.498928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.498939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.499259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.499269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.499583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.499593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.499806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.499817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.500110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.500120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.500450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.500460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.500799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.500811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 
00:30:36.649 [2024-07-15 14:16:34.501145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.501155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.501519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.501529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.501714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.501725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.502037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.502047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.502372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.502383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.502700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.502711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.503033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.503044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.503232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.503244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.503581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.503592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.503914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.503925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 
00:30:36.649 [2024-07-15 14:16:34.504274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.504285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.504624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.504634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.504869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.504880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.505201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.505212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.505536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.505547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.505901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.505912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.506254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.506265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.506603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.506615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.506932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.506943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 00:30:36.649 [2024-07-15 14:16:34.507243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.649 [2024-07-15 14:16:34.507254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.649 qpair failed and we were unable to recover it. 
00:30:36.649 [2024-07-15 14:16:34.507584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.507595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.507908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.507919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.508241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.508253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.508598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.508608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.508895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.508906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.509241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.509252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.509572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.509583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.509930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.509941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.510252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.510263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.510585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.510596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 
00:30:36.650 [2024-07-15 14:16:34.510886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.510897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.511962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.511986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.512308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.512320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.512603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.512614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.512933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.512945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.513292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.513304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.513624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.513635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.513946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.513957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.514273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.514284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.514470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.514481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 
00:30:36.650 [2024-07-15 14:16:34.514821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.514836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.515027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.515038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.515312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.515322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.515661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.515671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.515884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.515895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.516222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.516233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.516572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.516583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.516903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.516914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.517219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.517229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.517548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.517559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 
00:30:36.650 [2024-07-15 14:16:34.517878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.517889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.518231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.518242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.518563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.518573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.518919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.518930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.650 [2024-07-15 14:16:34.519269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.650 [2024-07-15 14:16:34.519280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.650 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.519613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.519624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.519991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.520003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.520296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.520307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.520631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.520641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.520952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.520963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 
00:30:36.651 [2024-07-15 14:16:34.521273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.521283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.521592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.521603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.521907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.521919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.522266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.522276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.522591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.522601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.522924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.522935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.523252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.523264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.523563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.523577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.523896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.523906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.524241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.524252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 
00:30:36.651 [2024-07-15 14:16:34.524575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.524586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.524910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.524921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.525259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.525270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.525458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.525469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.525794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.525805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.526115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.526126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.526471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.526483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.526806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.526818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.527137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.527147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.527487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.527497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 
00:30:36.651 [2024-07-15 14:16:34.527693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.527703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.528015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.528026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.528363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.528374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.528688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.528698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.529015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.529025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.529333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.529344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.529570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.529581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.529934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.529946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.530265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.530276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 00:30:36.651 [2024-07-15 14:16:34.530601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.651 [2024-07-15 14:16:34.530611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.651 qpair failed and we were unable to recover it. 
[... the same three-record failure sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats near-verbatim for every reconnect attempt from 14:16:34.530931 through 14:16:34.593829 ...]
00:30:36.657 [2024-07-15 14:16:34.594149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.657 [2024-07-15 14:16:34.594158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.657 qpair failed and we were unable to recover it.
00:30:36.657 [2024-07-15 14:16:34.594364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.594375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.594678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.594688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.594861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.594871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.595177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.595186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.595492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.595501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.595818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.595828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.596150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.596160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.596503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.596512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.596823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.596835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.597075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.597086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 
00:30:36.657 [2024-07-15 14:16:34.597395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.597406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.597749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.597765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.598072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.598081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.598377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.598386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.598692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.598701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.599066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.599076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.599415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.599424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.599787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.599797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.600132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.657 [2024-07-15 14:16:34.600142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.657 qpair failed and we were unable to recover it. 00:30:36.657 [2024-07-15 14:16:34.600441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.600452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 
00:30:36.658 [2024-07-15 14:16:34.600792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.600802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.601017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.601026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.601081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.601093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.601389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.601399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.601698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.601707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.602051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.602061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.602397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.602407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.602802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.602812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.603099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.603109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.603460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.603470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 
00:30:36.658 [2024-07-15 14:16:34.603836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.603846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.604162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.604172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.604484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.604494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.604788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.604799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.605129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.605138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.605476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.605485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.605820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.605830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.606066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.606076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.606391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.606401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.606719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.606730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 
00:30:36.658 [2024-07-15 14:16:34.607048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.607058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.607388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.607398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.607719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.607728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.608044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.608054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.608435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.608445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.608809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.608820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.609024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.609033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.609312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.609321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.609733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.609743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.610061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.610072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 
00:30:36.658 [2024-07-15 14:16:34.610457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.610467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.610800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.610810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.611020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.611029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.611375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.611389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.611708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.611717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.611910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.611922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.612244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.612253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.612581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.612591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.612870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.612880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.658 [2024-07-15 14:16:34.613189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.613199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 
00:30:36.658 [2024-07-15 14:16:34.613595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.658 [2024-07-15 14:16:34.613604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.658 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.613935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.613946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.614147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.614156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.614496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.614506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.614814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.614824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.615147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.615157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.615447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.615457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.615810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.615820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.616153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.616163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.616482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.616492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 
00:30:36.659 [2024-07-15 14:16:34.616671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.616681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.617041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.617051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.617368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.617377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.617696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.617705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.617894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.617904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.618265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.618275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.618661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.618670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.618863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.618876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.619169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.619179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.619506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.619516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 
00:30:36.659 [2024-07-15 14:16:34.619873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.619883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.620229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.620238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.620564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.620573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.620940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.620950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.621287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.621296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.621581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.621591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.621909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.621919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.622208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.622217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.622569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.622578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.622918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.622928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 
00:30:36.659 [2024-07-15 14:16:34.623263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.623275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.623601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.623611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.623938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.623948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.624296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.624305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.624654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.624663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.625017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.625027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.659 [2024-07-15 14:16:34.625336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.659 [2024-07-15 14:16:34.625345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.659 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.625649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.625658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.625983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.625993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.626370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.626380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 
00:30:36.660 [2024-07-15 14:16:34.626692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.626701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.627030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.627039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.627369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.627378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.627713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.627722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.628064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.628074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.628414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.628423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.628759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.628769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.629113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.629122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.629471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.629481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.629832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.629842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 
00:30:36.660 [2024-07-15 14:16:34.630216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.630226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.630572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.630581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.630915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.630925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.631274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.631283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.631623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.631633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.631947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.631957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.632328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.632337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.632555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.632564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.632932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.632942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.633248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.633257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 
00:30:36.660 [2024-07-15 14:16:34.633621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.633630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.633945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.633955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.634290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.634299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.634628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.634637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.634970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.634980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.635319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.635328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.635625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.635634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.635933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.635943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.636280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.636290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 00:30:36.660 [2024-07-15 14:16:34.636640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.636650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it. 
00:30:36.660 [2024-07-15 14:16:34.637002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.660 [2024-07-15 14:16:34.637012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.660 qpair failed and we were unable to recover it.
00:30:36.660-00:30:36.666 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously with only the timestamps changing, from 14:16:34.637 through 14:16:34.707 ...]
00:30:36.666 [2024-07-15 14:16:34.707678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.707688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it.
00:30:36.666 [2024-07-15 14:16:34.708046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.708057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.708378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.708387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.708728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.708737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.709118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.709127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.709471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.709480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.709818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.709828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.710181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.710190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.710523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.710533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.710876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.710886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.711189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.711198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 
00:30:36.666 [2024-07-15 14:16:34.711529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.711538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.711858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.711868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.712211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.712221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.712551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.712560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.712899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.712909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.713247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.713256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.713598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.713607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.713899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.713909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.714250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.714260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.714600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.714609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 
00:30:36.666 [2024-07-15 14:16:34.714955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.714965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.715306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.715315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.715640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.715649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.715982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.715993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.716340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.716350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.716694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.716704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.717045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.717055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.717395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.717405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.717759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.717770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.718140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.718150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 
00:30:36.666 [2024-07-15 14:16:34.718327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.718338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.718759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.718769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.719093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.719102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.719439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.719448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.719796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.719806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.720129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.720139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.720332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.666 [2024-07-15 14:16:34.720343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.666 qpair failed and we were unable to recover it. 00:30:36.666 [2024-07-15 14:16:34.720650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.720659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.720986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.720996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.721332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.721341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 
00:30:36.667 [2024-07-15 14:16:34.721670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.721680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.721950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.721959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.722258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.722268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.722614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.722623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.722918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.722927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.723281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.723290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.723607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.723616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.723926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.723936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.724290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.724300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.724629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.724638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 
00:30:36.667 [2024-07-15 14:16:34.725017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.725027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.725339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.725349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.725689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.725699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.726041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.726050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.726395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.726405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.726745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.726759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.727103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.727113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.727454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.727463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.727788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.727799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.728160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.728169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 
00:30:36.667 [2024-07-15 14:16:34.728515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.728525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.728827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.728836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.729132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.729141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.729499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.729512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.729874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.729884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.730247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.730257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.730572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.730582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.730919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.730929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.731271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.731280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.731624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.731634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 
00:30:36.667 [2024-07-15 14:16:34.732046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.732056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.732362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.732371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.732729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.732738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.733088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.733097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.733411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.733420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.733761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.733771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.734113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.734123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.667 [2024-07-15 14:16:34.734353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.667 [2024-07-15 14:16:34.734362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.667 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.734595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.734604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.734923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.734933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 
00:30:36.668 [2024-07-15 14:16:34.735289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.735299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.735641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.735651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.736011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.736021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.736335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.736345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.736682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.736691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.737043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.737052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.737370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.737379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.737720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.737729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.738068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.738078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.738420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.738430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 
00:30:36.668 [2024-07-15 14:16:34.738712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.738721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.739062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.739072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.739404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.739413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.739760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.739770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.740149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.740159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.740513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.740522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.740862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.740872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.741227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.741236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.741434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.741445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.741783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.741793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 
00:30:36.668 [2024-07-15 14:16:34.742105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.742114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.742496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.742505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.742837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.742847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.743187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.743196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.743547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.743559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.743893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.743903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.744245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.744255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.744596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.744605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.744997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.745007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.745364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.745373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 
00:30:36.668 [2024-07-15 14:16:34.745671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.745680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.746002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.746012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.746336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.746345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.668 [2024-07-15 14:16:34.746683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.668 [2024-07-15 14:16:34.746693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.668 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.747056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.747065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.747403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.747412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.747763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.747774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.748111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.748120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.748356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.748365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.748732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.748742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 
00:30:36.669 [2024-07-15 14:16:34.748939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.748949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.749231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.749240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.749533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.749542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.749900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.749909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.669 [2024-07-15 14:16:34.750242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.669 [2024-07-15 14:16:34.750251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.669 qpair failed and we were unable to recover it. 00:30:36.938 [2024-07-15 14:16:34.750582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.938 [2024-07-15 14:16:34.750593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.938 qpair failed and we were unable to recover it. 00:30:36.938 [2024-07-15 14:16:34.750922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.938 [2024-07-15 14:16:34.750933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.938 qpair failed and we were unable to recover it. 00:30:36.938 [2024-07-15 14:16:34.751280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.938 [2024-07-15 14:16:34.751289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.938 qpair failed and we were unable to recover it. 00:30:36.938 [2024-07-15 14:16:34.751614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.938 [2024-07-15 14:16:34.751624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.938 qpair failed and we were unable to recover it. 00:30:36.938 [2024-07-15 14:16:34.751910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.938 [2024-07-15 14:16:34.751920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.938 qpair failed and we were unable to recover it. 
00:30:36.938 [2024-07-15 14:16:34.752241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.938 [2024-07-15 14:16:34.752250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.938 qpair failed and we were unable to recover it.
00:30:36.938 [2024-07-15 14:16:34.752579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.938 [2024-07-15 14:16:34.752592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.938 qpair failed and we were unable to recover it.
00:30:36.938 [2024-07-15 14:16:34.752920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.938 [2024-07-15 14:16:34.752929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.938 qpair failed and we were unable to recover it.
00:30:36.938 [2024-07-15 14:16:34.753985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.938 [2024-07-15 14:16:34.754009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.938 qpair failed and we were unable to recover it.
00:30:36.938 [2024-07-15 14:16:34.754349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.938 [2024-07-15 14:16:34.754359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.938 qpair failed and we were unable to recover it.
00:30:36.938 [2024-07-15 14:16:34.754556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.938 [2024-07-15 14:16:34.754567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.938 qpair failed and we were unable to recover it.
00:30:36.938 [2024-07-15 14:16:34.754861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.938 [2024-07-15 14:16:34.754872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.938 qpair failed and we were unable to recover it.
00:30:36.938 [2024-07-15 14:16:34.755172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.755181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.755516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.755526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.755687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.755697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.756043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.756054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.756458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.756468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.756813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.756823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.757166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.757175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.757472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.757482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.757817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.757827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.758174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.758184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.758523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.758532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.758900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.758909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.759239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.759248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.759580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.759589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.759907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.759917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.760267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.760277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.760565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.760575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.760916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.760926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.761215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.761224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.761567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.761576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.761904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.761914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.762259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.762268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.762632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.762641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.762984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.762994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.763333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.763343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.763696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.763705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.763898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.763908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.764244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.764253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.764490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.764499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.764787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.764798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.765128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.765138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.765543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.765552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.765896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.765906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.766246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.766256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.766578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.766587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.766915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.766928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.767291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.767300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.767676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.767686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.767918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.767929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.939 qpair failed and we were unable to recover it.
00:30:36.939 [2024-07-15 14:16:34.768142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.939 [2024-07-15 14:16:34.768151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.768425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.768434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.768742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.768757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.769128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.769138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.769446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.769456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.769776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.769786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.770078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.770087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.770428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.770437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.770797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.770807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.771105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.771115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.771469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.771479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.771817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.771827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.772155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.772165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.772521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.772530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.772848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.772858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.773192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.773202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.773528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.773538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.773739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.773749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.774093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.774102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.774422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.774431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.774771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.774781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.775131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.775140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.775467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.775477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.775814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.775826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.776176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.776186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.776378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.776388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.776685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.776694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.777058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.777067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.777389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.777399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.777733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.777743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.778143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.778153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.778472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.778481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.778824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.778834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.779189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.779198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.779524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.779534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.779870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.779879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.780210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.780220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.780554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.780563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.780905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.780915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.781256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.781265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.940 qpair failed and we were unable to recover it.
00:30:36.940 [2024-07-15 14:16:34.781604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.940 [2024-07-15 14:16:34.781613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.781808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.781819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.782141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.782150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.782384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.782394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.782613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.782622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.782994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.783004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.783321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.783330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.783665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.783674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.784000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.784010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.784306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.784316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.784618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.784628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.784990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.785000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.785351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.785360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.785699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.785709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.786042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.786051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.786387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.786397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.786734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.786743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.787089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.787099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.787435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.787444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.787795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.787805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.788039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.788048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.788409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.788419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.788765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.788776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.789116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.789126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.789473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.789485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.789822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.789832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.790177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.790186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.790526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.790535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.790875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.790885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.791299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.791309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.791631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.791640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.791827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.791838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.792186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.792196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.792541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.792550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.792886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.792897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.793138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.793147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.793459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.793469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.793791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.793801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.794127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.794137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.794472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.794481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.941 [2024-07-15 14:16:34.794832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.941 [2024-07-15 14:16:34.794842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.941 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.795193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.795202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.795524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.795534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.795873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.795883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.796191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.796201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.796492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.796501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.796853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.796863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.797154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.797163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.797482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.797491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.797834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.797844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.798200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.798210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.798525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.798536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.798870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.798879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.799229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.799238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.799558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.799569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.799890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.799900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.800268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.800277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.800627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.800636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.800988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.800998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.801317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.801326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.801675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.801684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.801881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.801891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.802218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.802228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.802531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.802540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.802882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.802892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.803233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.803243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.803585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.803595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.803930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.803940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.804290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.804300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.804711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.804721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.805076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.805086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.805389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.805398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.805737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.805746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.806135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.806145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.806448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.806458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.806791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.806801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.806997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.807006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.807300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.807309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.807679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.807688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.808080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.808091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.942 qpair failed and we were unable to recover it.
00:30:36.942 [2024-07-15 14:16:34.808419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.942 [2024-07-15 14:16:34.808428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.808774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.808784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.809014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.809023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.809339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.809348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.809669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.809679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.809991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.810001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.810339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.810348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.810563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.810573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.810892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.810902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.811239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.811248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.811570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.811579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.811956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.811966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.812302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.812315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.812653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.812662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.813590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.813611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.813929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.813940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.814291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.814301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.814657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.814666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.815029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.815039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.815254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.815263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.815610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.815620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.815982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.815993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.816405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.816414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.816689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.816698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.817017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.817027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.817363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.817372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.817707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.817717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.818067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.818077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.818401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.818410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.818770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.818780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.819119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.819129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.819466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.819475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.819816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.943 [2024-07-15 14:16:34.819827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.943 qpair failed and we were unable to recover it.
00:30:36.943 [2024-07-15 14:16:34.820189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.820199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.820518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.820527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.820879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.820888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.821285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.821295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.821644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.821653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.822039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.822049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.822419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.822429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.822762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.822772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.823012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.823022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.823344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.823353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.823673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.944 [2024-07-15 14:16:34.823682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.944 qpair failed and we were unable to recover it.
00:30:36.944 [2024-07-15 14:16:34.824011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.824021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.824372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.824382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.824709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.824719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.825133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.825143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.825498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.825508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.825800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.825810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.826017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.826027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.826395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.826406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.826736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.826745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.826946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.826957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 
00:30:36.944 [2024-07-15 14:16:34.827207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.827216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.827530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.827539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.827938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.827948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.828286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.828296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.828515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.828525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.828826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.828836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.829238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.829248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.829570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.829579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.829908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.829918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.830249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.830259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 
00:30:36.944 [2024-07-15 14:16:34.830608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.830617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.831360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.831380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.831714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.831725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.832067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.832077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.832414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.832424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.832627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.832638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.832949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.832959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.833311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.833321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.833684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.833694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 00:30:36.944 [2024-07-15 14:16:34.834062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.944 [2024-07-15 14:16:34.834072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.944 qpair failed and we were unable to recover it. 
00:30:36.945 [2024-07-15 14:16:34.834502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.834511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.834867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.834877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.835219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.835230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.835348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.835358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.835762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.835772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.836095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.836105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.836448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.836460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.836797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.836808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.837159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.837170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.837540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.837550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 
00:30:36.945 [2024-07-15 14:16:34.837960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.837970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.838229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.838239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.838547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.838556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.838792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.838802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.839027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.839037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.839275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.839285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.839664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.839674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.840008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.840019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.840371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.840381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.840686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.840696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 
00:30:36.945 [2024-07-15 14:16:34.841095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.841105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.841328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.841337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.841686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.841695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.842063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.842073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.842310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.842320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.842668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.842677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.842926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.842935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.843295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.843304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.843487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.843497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.843840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.843851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 
00:30:36.945 [2024-07-15 14:16:34.844214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.844224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.844552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.844562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.844933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.844943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.845292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.845302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.845664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.845674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.845874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.845884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.846225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.846235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.846440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.846450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.846763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.846773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 00:30:36.945 [2024-07-15 14:16:34.847099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.945 [2024-07-15 14:16:34.847108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:36.945 qpair failed and we were unable to recover it. 
00:30:36.945 [2024-07-15 14:16:34.847476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.945 [2024-07-15 14:16:34.847486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.945 qpair failed and we were unable to recover it.
00:30:36.945 [2024-07-15 14:16:34.847838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.847847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.848177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.848187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.848517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.848526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.848869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.848879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.849235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.849245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.849540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.849550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.849888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.849899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.850210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.850220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1573298 Killed "${NVMF_APP[@]}" "$@"
00:30:36.946 [2024-07-15 14:16:34.850559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.850569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.850908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.850918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:36.946 [2024-07-15 14:16:34.851263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.851272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:36.946 [2024-07-15 14:16:34.851608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.851618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:36.946 [2024-07-15 14:16:34.851980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.851991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:36.946 [2024-07-15 14:16:34.852245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.852254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:36.946 [2024-07-15 14:16:34.852571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.852581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.852809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.852819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.852996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.853006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.853204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.853214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.853556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.853566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.853854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.853864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.854224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.854233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.854571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.854581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.854925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.854935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.855290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.855299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.855614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.855623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.855854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.855864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.856172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.856181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.856504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.856514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.856806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.856815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.857048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.857058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.857417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.857427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.857768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.857778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.858120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.858130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.858363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.858373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.858625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.858635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.858982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.858992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.859337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.859347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 [2024-07-15 14:16:34.859685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.946 [2024-07-15 14:16:34.859695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.946 qpair failed and we were unable to recover it.
00:30:36.946 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1574195
00:30:36.946 [2024-07-15 14:16:34.859939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.859950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.860147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.860157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1574195
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:36.947 [2024-07-15 14:16:34.860498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.860508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1574195 ']'
00:30:36.947 [2024-07-15 14:16:34.860735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.860745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:36.947 [2024-07-15 14:16:34.860847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.860857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:36.947 [2024-07-15 14:16:34.861217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.861227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:36.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:36.947 [2024-07-15 14:16:34.861553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.861563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 14:16:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:36.947 [2024-07-15 14:16:34.861806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.861816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.862236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.862246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.862498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.862508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.862873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.862882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.863224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.863234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.863579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.863589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.863911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.863922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.864239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.864248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.864489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.864499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.864812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.864822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.865204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.865214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.865540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.865550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.865886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.865898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.866229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.866239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.866560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.866571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.866661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.866670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 [2024-07-15 14:16:34.867110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.947 [2024-07-15 14:16:34.867196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420
00:30:36.947 qpair failed and we were unable to recover it.
00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Write completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.947 Read completed with error (sct=0, sc=8) 00:30:36.947 starting I/O failed 00:30:36.948 Write completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 Write completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 Write completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 Read completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 Read completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 Write completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 Read completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 Read completed with error (sct=0, sc=8) 00:30:36.948 starting I/O failed 00:30:36.948 [2024-07-15 14:16:34.867448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.948 [2024-07-15 14:16:34.867791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.867803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 
00:30:36.948 [2024-07-15 14:16:34.868026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.868034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.868403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.868410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.868719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.868727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.869016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.869025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.869323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.869330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.869661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.869669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.870003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.870011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.870392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.870400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.870636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.870643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.871036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.871044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 
00:30:36.948 [2024-07-15 14:16:34.871233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.871241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.871561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.871568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.871878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.871886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.872216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.872223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.872422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.872430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.872756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.872764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.873020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.873028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.873355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.873362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.873713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.873720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.873829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.873839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 
00:30:36.948 [2024-07-15 14:16:34.874251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.874260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.874581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.874588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.874924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.874933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.875237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.875245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.875579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.875586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.875819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.875827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.876143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.876150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.876466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.876474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.876801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.876809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.877161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.877168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 
00:30:36.948 [2024-07-15 14:16:34.877475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.877483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.877826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.877834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.878182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.878189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.878520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.878527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.878870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.878878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.879208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.948 [2024-07-15 14:16:34.879215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.948 qpair failed and we were unable to recover it. 00:30:36.948 [2024-07-15 14:16:34.879557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.879564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.879910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.879920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.880264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.880272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.880593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.880601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 
00:30:36.949 [2024-07-15 14:16:34.881003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.881011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.881359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.881367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.881666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.881674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.881973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.881980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.882299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.882306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.882603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.882610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.882932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.882940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.883258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.883265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.883592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.883599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.883923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.883930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 
00:30:36.949 [2024-07-15 14:16:34.884213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.884219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.884425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.884431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.884552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.884559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.884768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.884775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.884973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.884980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.885288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.885295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.885602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.885610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.885933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.885940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.886266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.886272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.886593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.886599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 
00:30:36.949 [2024-07-15 14:16:34.886931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.886937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.887106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.887114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.887439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.887446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.887757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.887764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.888090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.888098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.888424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.888431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.888738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.888744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.889046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.889054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.889227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.889234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.889308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.889316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 
00:30:36.949 [2024-07-15 14:16:34.889648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.889655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.889957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.889964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.890127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.890134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.890417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.890424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.890615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.890623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.890794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.890802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.949 [2024-07-15 14:16:34.891068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.949 [2024-07-15 14:16:34.891075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.949 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.891372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.891380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.891676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.891683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.892045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.892053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 
00:30:36.950 [2024-07-15 14:16:34.892361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.892368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.892692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.892698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.893101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.893108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.893427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.893434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.893796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.893804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.894030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.894037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.894374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.894381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.894755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.894763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.895081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.895087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.895436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.895443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 
00:30:36.950 [2024-07-15 14:16:34.895776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.895784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.895999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.896007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.896365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.896373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.896707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.896714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.897031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.897039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.897395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.897401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.897714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.897721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.898047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.898054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.898387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.898393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.898707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.898714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 
00:30:36.950 [2024-07-15 14:16:34.899026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.899034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.899372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.899380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.899711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.899718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.899826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.899833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.900178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.900185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.900575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.900582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.900782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.900790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.901081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.901088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.901405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.901412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.901703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.901710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 
00:30:36.950 [2024-07-15 14:16:34.902012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.902019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.902297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.902304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.902634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.902640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.903018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.903025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.903414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.903421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.903737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.903743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.903971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.903978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.950 qpair failed and we were unable to recover it. 00:30:36.950 [2024-07-15 14:16:34.904311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.950 [2024-07-15 14:16:34.904321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.904650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.904657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.905065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.905072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 
00:30:36.951 [2024-07-15 14:16:34.905252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.905259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.905548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.905555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.905782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.905789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.906088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.906095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.906420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.906427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.906732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.906739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.907043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.907050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.907361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.907367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.907694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.907701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.908093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.908101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 
00:30:36.951 [2024-07-15 14:16:34.908403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.908410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.908740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.908747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.908948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.908955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.909266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.909272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.909618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.909625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.909931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.909938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.910266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.910272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.910574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.910580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.910907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.910914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 00:30:36.951 [2024-07-15 14:16:34.911115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.951 [2024-07-15 14:16:34.911123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.951 qpair failed and we were unable to recover it. 
00:30:36.951 [2024-07-15 14:16:34.911452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.911459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.911645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.911653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.911979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.911986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.912393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.912400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.912728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.912735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.912959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.912967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.913300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.913308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.913409] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:30:36.951 [2024-07-15 14:16:34.913461] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:36.951 [2024-07-15 14:16:34.913612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.913620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
00:30:36.951 [2024-07-15 14:16:34.913962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.913969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
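Note: the two interleaved "Starting SPDK / DPDK EAL parameters" lines above mark a fresh SPDK process (the nvmf app on core mask 0xF0) booting DPDK while the reconnect attempts continue. A rough sketch of how an SPDK application's env options become that EAL argument line, assuming the spdk/env.h API; the option-to-argument mapping noted in the comments (for example shm_id 0 producing --file-prefix=spdk0) is my reading of SPDK's env_dpdk layer, not something taken from this log:

    #include "spdk/env.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "nvmf";       /* first token of the EAL parameter line      */
        opts.core_mask = "0xF0";  /* emitted as the -c 0xF0 EAL argument        */
        opts.shm_id = 0;          /* yields --file-prefix=spdk0 --proc-type=auto */

        /* spdk_env_init() assembles the "[ DPDK EAL parameters: ... ]" line
         * logged above and initializes DPDK (hugepages, memory, lcores). */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }
        return 0;
    }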
00:30:36.951 [2024-07-15 14:16:34.914178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.951 [2024-07-15 14:16:34.914185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.951 qpair failed and we were unable to recover it.
[... the same three-record failure sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 14:16:34.914 through 14:16:34.978 ...]
00:30:36.954 EAL: No free 2048 kB hugepages reported on node 1
00:30:36.957 [2024-07-15 14:16:34.978428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.978436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.978795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.978802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.979117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.979124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.979450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.979456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.979847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.979854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.980160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.980167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.980506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.980513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.980696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.980703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.981000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.981007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.981319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.981326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 
00:30:36.957 [2024-07-15 14:16:34.981531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.981540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.981737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.981743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.982056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.982063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.982391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.982398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.982660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.982667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.982842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.982850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.983028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.983035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.983317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.983324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.983660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.983666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.983991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.983998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 
00:30:36.957 [2024-07-15 14:16:34.984312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.984320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.984643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.984651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.984864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.984871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.985019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.985026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.985358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.985365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.985551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.985557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.985894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.985902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.986203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.986210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.986561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.986568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.986898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.986905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 
00:30:36.957 [2024-07-15 14:16:34.987286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.987294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.987625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.987632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.987991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.987998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.988294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.988300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.988477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.988484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.988767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.988774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.957 qpair failed and we were unable to recover it. 00:30:36.957 [2024-07-15 14:16:34.989091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.957 [2024-07-15 14:16:34.989098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.989427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.989434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.989654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.989660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.989847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.989854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-07-15 14:16:34.990069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.990075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.990241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.990248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.990589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.990596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.990966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.990973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.991288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.991295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.991601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.991608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.991872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.991879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.992098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.992105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.992337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.992344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.992672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.992679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-07-15 14:16:34.992992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.993001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.993332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.993339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.993650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.993657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.993994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.994001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.994326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.994333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.994557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.994564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.994870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.994877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.995051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.995058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.995358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.995364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.995565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.995573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-07-15 14:16:34.995898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.995905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.996244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.996251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.996584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.996591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.996805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.996812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.997064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.997071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.997219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.997226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.997520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.997527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.997842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.997849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.998075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.998082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.998488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.998495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-07-15 14:16:34.998794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.998801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.999116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.999124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.999423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.999430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:34.999748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:34.999758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.000055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.000061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.000350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.000358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.000680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.000687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.000995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.001003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.001335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.001342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.001507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.001514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-07-15 14:16:35.001858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.001866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.002201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.002208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.002510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.958 [2024-07-15 14:16:35.002517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-07-15 14:16:35.002845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.002852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.003170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.003178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.003367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.003375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.003714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.003721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.003897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.003905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.004215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.004222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.004558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.004565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 
00:30:36.959 [2024-07-15 14:16:35.004767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.004778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.004967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.004974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.005341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.005349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.005738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.005745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.006156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.006164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.006460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.006467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.006836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.006844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.006923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.959 [2024-07-15 14:16:35.007219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.007227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.007648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.007655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 
00:30:36.959 [2024-07-15 14:16:35.007845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.007853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.008160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.008167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.008547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.008554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.008734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.008742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.008965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.008973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.009268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.009276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.009725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.009733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.010055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.010063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.010387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.010394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.010758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.010766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 
00:30:36.959 [2024-07-15 14:16:35.010965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.010973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.011318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.011326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.011653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.011661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.011862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.011870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.012173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.012180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.012397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.012404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.012735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.012742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.013136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.013143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.013466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.013473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.013805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.013813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 
00:30:36.959 [2024-07-15 14:16:35.014016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.014024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.014379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.014386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.014583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.014591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.014886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.014894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.015274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.015281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.015443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.959 [2024-07-15 14:16:35.015450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-07-15 14:16:35.015884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.960 [2024-07-15 14:16:35.015892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.960 qpair failed and we were unable to recover it. 00:30:36.960 [2024-07-15 14:16:35.016115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.960 [2024-07-15 14:16:35.016122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.960 qpair failed and we were unable to recover it. 00:30:36.960 [2024-07-15 14:16:35.016441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.960 [2024-07-15 14:16:35.016448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.960 qpair failed and we were unable to recover it. 00:30:36.960 [2024-07-15 14:16:35.016773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.960 [2024-07-15 14:16:35.016780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:36.960 qpair failed and we were unable to recover it. 
00:30:36.960 [2024-07-15 14:16:35.017003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.960 [2024-07-15 14:16:35.017010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:36.960 qpair failed and we were unable to recover it.
[... the same three-line connect() failed / sock connection error / qpair failed sequence repeats for every connection retry between 14:16:35.017 and 14:16:35.072, differing only in timestamps ...]
00:30:37.234 [2024-07-15 14:16:35.072246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.234 [2024-07-15 14:16:35.072254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.234 qpair failed and we were unable to recover it.
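For anyone triaging this failure mode: errno = 111 on Linux is ECONNREFUSED, i.e. the target 10.0.0.2:4420 actively refused the TCP connection, which typically means no NVMe-oF TCP listener was up yet on that address and port while the initiator kept retrying. A minimal standalone sketch (plain POSIX sockets, not SPDK code) that reproduces the same errno when nothing is listening:

    /* connect_probe.c - probe a TCP endpoint and report errno on failure.
     * Illustration only; SPDK's posix_sock_create performs the same
     * connect() internally and logs errno = 111 as seen above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe-oF TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on 10.0.0.2:4420 this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }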
00:30:37.234 [2024-07-15 14:16:35.072640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.234 [2024-07-15 14:16:35.072647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.234 qpair failed and we were unable to recover it. 00:30:37.234 [2024-07-15 14:16:35.072958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.234 [2024-07-15 14:16:35.072965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.234 qpair failed and we were unable to recover it. 00:30:37.234 [2024-07-15 14:16:35.073137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.234 [2024-07-15 14:16:35.073144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.234 qpair failed and we were unable to recover it. 00:30:37.234 [2024-07-15 14:16:35.073438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.234 [2024-07-15 14:16:35.073445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.234 qpair failed and we were unable to recover it. 00:30:37.234 [2024-07-15 14:16:35.073774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.234 [2024-07-15 14:16:35.073781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.234 qpair failed and we were unable to recover it. 00:30:37.234 [2024-07-15 14:16:35.073983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.234 [2024-07-15 14:16:35.073990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.234 qpair failed and we were unable to recover it. 00:30:37.234 [2024-07-15 14:16:35.074329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.234 [2024-07-15 14:16:35.074335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.234 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.074720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.074727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.074947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.074954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.075088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.235 [2024-07-15 14:16:35.075115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:37.235 [2024-07-15 14:16:35.075122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:37.235 [2024-07-15 14:16:35.075128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:37.235 [2024-07-15 14:16:35.075134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:37.235 [2024-07-15 14:16:35.075683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:37.235 [2024-07-15 14:16:35.075815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:37.235 [2024-07-15 14:16:35.075973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:37.235 [2024-07-15 14:16:35.075976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:37.235 [2024-07-15 14:16:35.075125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.235 [2024-07-15 14:16:35.075133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.235 qpair failed and we were unable to recover it.
00:30:37.235 [2024-07-15 14:16:35.075473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.235 [2024-07-15 14:16:35.075480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.235 qpair failed and we were unable to recover it.
00:30:37.235 [2024-07-15 14:16:35.075828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.235 [2024-07-15 14:16:35.075836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.235 qpair failed and we were unable to recover it.
00:30:37.235 [2024-07-15 14:16:35.076110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.235 [2024-07-15 14:16:35.076117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.235 qpair failed and we were unable to recover it.
00:30:37.235 [2024-07-15 14:16:35.076447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.235 [2024-07-15 14:16:35.076453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.235 qpair failed and we were unable to recover it.
00:30:37.235 [2024-07-15 14:16:35.076662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.235 [2024-07-15 14:16:35.076678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.235 qpair failed and we were unable to recover it.
00:30:37.235 [2024-07-15 14:16:35.077043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.235 [2024-07-15 14:16:35.077051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.235 qpair failed and we were unable to recover it.
00:30:37.235 [2024-07-15 14:16:35.077231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.077239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.077448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.077455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.077758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.077766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.078093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.078100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.078301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.078308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.078670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.078676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.079031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.079039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.079353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.079360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.079429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.079436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.079540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.079547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-07-15 14:16:35.079858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.079865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.080172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.080179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.080356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.080370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.080722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.080728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.081032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.081040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.081379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.081389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.081722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.081730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.082153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.082160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.082471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.082478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.082813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.082820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-07-15 14:16:35.083005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.083013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.083247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.083253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.083573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.083581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.083756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.083764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.084098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.084105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.084325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.084331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.084652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.084660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.084896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.084904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.085118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.085124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.085454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.085462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-07-15 14:16:35.085768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.085776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.086153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.086160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.086377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.086384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.086601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.086607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.086703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.086710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.087040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.087047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.087272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.087279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.087590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.087597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.087777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.087786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.087995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.088002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-07-15 14:16:35.088198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.088204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.088245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.088252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.088362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.088369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.088638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.088645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.088831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.088838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.089175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.089182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.089484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.089491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.089820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.089828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.090045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.090052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.090245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.090251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-07-15 14:16:35.090362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7800 is same with the state(5) to be set 00:30:37.235 [2024-07-15 14:16:35.090988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.235 [2024-07-15 14:16:35.091077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:37.235 qpair failed and we were unable to recover it. 00:30:37.235 [2024-07-15 14:16:35.091545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.091580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.092054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.092139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9894000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.092364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.092372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.092676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.092683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.093015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.093022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.093251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.093258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.093468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.093475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.093766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.093773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-07-15 14:16:35.094102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.094109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.094331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.094339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.094518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.094526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.094740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.094747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.094923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.094931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.095330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.095336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.095699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.095706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.096033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.096039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.096344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.096351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.096669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.096678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-07-15 14:16:35.097015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.097022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.097330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.097337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.097541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.097548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.097712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.097720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.098045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.098053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.098399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.098407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.098729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.098737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.099072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.099078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.099398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.099404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.099731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.099738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-07-15 14:16:35.100042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.100049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.100242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.100249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.100458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.100464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.100780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.100788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.101088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.101095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.101442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.101449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.101629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.101635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.101958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.101965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.102163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.102170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.102348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.102354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-07-15 14:16:35.102554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.102560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.102946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.102953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.103152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.103160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.103488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.103495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.103823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.103830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.104192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.104200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.104427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.104434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.104742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.104748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.105073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.105080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.105495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.105502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-07-15 14:16:35.105702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.105709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.105910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.105918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.106121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.106128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.106484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.106491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.106805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.106812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.107143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.107149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.107478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.107485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.107677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.107684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.236 [2024-07-15 14:16:35.107975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.236 [2024-07-15 14:16:35.107982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.236 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.108306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.108314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 
00:30:37.237 [2024-07-15 14:16:35.108621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.108628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.108829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.108842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.109219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.109226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.109614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.109621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.109941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.109948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.109989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.109995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.110373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.110380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.110682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.110690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.110733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.110740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.111065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.111073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 
00:30:37.237 [2024-07-15 14:16:35.111455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.111461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.111764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.111771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.112069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.112077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.112146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.112153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.112434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.112441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.112619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.112627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.112957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.112965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.113285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.113292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.113602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.113609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.113792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.113799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 
00:30:37.237 [2024-07-15 14:16:35.114084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.114091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.114444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.114452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.114757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.114765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.115075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.115082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.115248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.115256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.115612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.115620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.115984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.115991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.116175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.116183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.116471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.116478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 00:30:37.237 [2024-07-15 14:16:35.116793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.237 [2024-07-15 14:16:35.116801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.237 qpair failed and we were unable to recover it. 
00:30:37.237 [2024-07-15 14:16:35.116986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.237 [2024-07-15 14:16:35.116993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.237 qpair failed and we were unable to recover it.
[... identical output truncated: the same two-line error pair — posix.c:1038:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 — repeats continuously from 14:16:35.117 to 14:16:35.176, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:30:37.240 [2024-07-15 14:16:35.176073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.240 [2024-07-15 14:16:35.176081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.240 qpair failed and we were unable to recover it.
00:30:37.240 [2024-07-15 14:16:35.176285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.176292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.176640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.176646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.176825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.176832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.177204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.177211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.177600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.177606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.177936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.177943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.178097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.178110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.178273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.178280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.178451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.178458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.178814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.178822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 
00:30:37.240 [2024-07-15 14:16:35.179148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.179155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.179505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.179512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-07-15 14:16:35.179740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.240 [2024-07-15 14:16:35.179747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.179920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.179927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.180283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.180290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.180590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.180598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.180888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.180895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.181068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.181074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.181473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.181480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.181803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.181810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-07-15 14:16:35.182146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.182153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.182498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.182504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.182742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.182749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.183062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.183069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.183258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.183265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.183435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.183442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.183750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.183765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.183954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.183961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.184188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.184195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.184513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.184519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-07-15 14:16:35.184713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.184721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.184910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.184917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.185246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.185252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.185647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.185654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.185806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.185813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.186128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.186135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.186306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.186312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.186708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.186716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.187009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.187016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.187214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.187221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-07-15 14:16:35.187574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.187580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.187764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.187772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.188155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.188162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.188468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.188475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.188670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.188678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.189029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.189037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.189331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.189338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.189667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.189674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.189716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.189722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.190032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.190039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-07-15 14:16:35.190346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.190352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.190682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.190689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.190954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.190960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.191182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.191190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.191532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.191539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.191729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.191736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.192037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.192044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.192210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.192216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.192453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.192460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.192802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.192809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-07-15 14:16:35.193010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.193017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.193173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.193180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.193493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.193499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.193824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.193831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.194133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.194140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.194302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.194308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.194619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.194627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.194855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.194861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.195239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.195247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.195574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.195581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-07-15 14:16:35.195958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.195966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.196137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.196144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.196486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.241 [2024-07-15 14:16:35.196492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-07-15 14:16:35.196881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.196888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.197084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.197092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.197259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.197265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.197563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.197569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.197874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.197882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.198207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.198214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.198386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.198393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-07-15 14:16:35.198804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.198811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.199124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.199130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.199312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.199319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.199483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.199491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.199687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.199694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.199992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.199999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.200312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.200319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.200657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.200664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.200976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.200982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.201334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.201340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-07-15 14:16:35.201544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.201550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.201735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.201741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.202000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.202006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.202177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.202185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.202470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.202477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.202786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.202793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.203116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.203122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.203424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.203431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.203826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.203833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.204150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.204157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-07-15 14:16:35.204337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.204344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.204508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.204515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.204821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.204828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.205146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.205153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.205354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.205361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.205692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.205699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.205886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.205894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.206198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.206205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.206385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.206393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.206684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.206691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-07-15 14:16:35.207017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.207024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.207419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.207426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.207829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.207836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.208163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.208169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.208500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.208507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.208835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.208842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.209173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.209180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.209484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.209491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.209686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.209694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.209910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.209918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-07-15 14:16:35.210230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.210237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.210422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.210430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.210699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.210706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.211066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.211073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.211299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.211306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.211618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.211624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.211872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.211879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.212192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.212199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.212519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.212525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-07-15 14:16:35.212835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.242 [2024-07-15 14:16:35.212842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-07-15 14:16:35.212884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.242 [2024-07-15 14:16:35.212890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.242 qpair failed and we were unable to recover it.
[... the three-line pattern above repeats verbatim, differing only in timestamps, for every reconnect attempt from 14:16:35.212884 through 14:16:35.270593; each attempt targets the same tqpair=0x7f989c000b90 at 10.0.0.2 port 4420 and fails identically with errno = 111 ...]
00:30:37.246 [2024-07-15 14:16:35.270947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.270954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.271261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.271268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.271476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.271482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.271787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.271794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.272160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.272167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.272477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.272484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.272665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.272674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.272964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.272971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.273330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.273337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.273654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.273660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 
00:30:37.246 [2024-07-15 14:16:35.273923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.273930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.274251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.274258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.274581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.274588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.274904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.274911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.275087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.275094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.275336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.275344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.275537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.275545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.275729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.275736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.276079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.276086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.276362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.276369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 
00:30:37.246 [2024-07-15 14:16:35.276676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.276683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.276861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.276868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.277209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.277216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.277517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.277523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.277843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.277850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.278080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.278088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.278266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.278273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.278599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.278606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.278785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.278794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.278978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.278985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 
00:30:37.246 [2024-07-15 14:16:35.279304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.279311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.279625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.279631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.280023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.280030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.280352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.280359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.280686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.280692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.281018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.281025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.281357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.281364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.281555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.281562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.281896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.281904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.282214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.282222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 
00:30:37.246 [2024-07-15 14:16:35.282432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.282440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.282842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.282849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.283032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.283039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.283386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.283392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.283597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.283603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.283832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.246 [2024-07-15 14:16:35.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.246 qpair failed and we were unable to recover it. 00:30:37.246 [2024-07-15 14:16:35.284113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.284121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.284300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.284307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.284491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.284498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.284610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.284616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 
00:30:37.247 [2024-07-15 14:16:35.285022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.285028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.285327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.285334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.285645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.285651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.285984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.285991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.286189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.286196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.286538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.286544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.286725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.286732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.286930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.286937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.287249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.287256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.287406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.287413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 
00:30:37.247 [2024-07-15 14:16:35.287803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.287810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.287963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.287969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.288150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.288156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.288335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.288342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.288672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.288678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.289046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.289053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.289439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.289445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.289755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.289762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.290118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.290125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.290425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.290431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 
00:30:37.247 [2024-07-15 14:16:35.290732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.290738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.291041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.291048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.291349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.291355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.291539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.291546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.291854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.291861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.292185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.292191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.292338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.292345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.292641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.292648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.292956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.292962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.293299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.293305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 
00:30:37.247 [2024-07-15 14:16:35.293609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.293616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.293918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.293924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.294237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.294244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.294425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.294432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.294744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.294754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.295055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.295061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.295355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.295363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.295687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.295694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.295993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.296000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.296311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.296317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 
00:30:37.247 [2024-07-15 14:16:35.296494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.296502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.296786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.296793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.296974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.296981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.297306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.297313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.297624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.297630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.297800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.297807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.298116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.298122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.298430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.298436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.298767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.298773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.298988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.298995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 
00:30:37.247 [2024-07-15 14:16:35.299342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.299349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.299495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.299502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.299860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.299867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.300264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.300270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.300662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.300670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.300981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.300988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.301308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.301314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.301659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.301666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.301856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.301863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.302206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.302212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 
00:30:37.247 [2024-07-15 14:16:35.302440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.247 [2024-07-15 14:16:35.302446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.247 qpair failed and we were unable to recover it. 00:30:37.247 [2024-07-15 14:16:35.302778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.302784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.303102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.303109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.303431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.303438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.303788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.303796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.304100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.304106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.304264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.304271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.304651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.304658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.304954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.304961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.305264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.305270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 
00:30:37.248 [2024-07-15 14:16:35.305493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.305499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.305820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.305827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.306149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.306156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.306457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.306464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.306765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.306773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.307090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.307097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.307418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.307427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.307619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.307626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.307794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.307801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 00:30:37.248 [2024-07-15 14:16:35.308096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.308103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it. 
00:30:37.248 [2024-07-15 14:16:35.308408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.248 [2024-07-15 14:16:35.308415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.248 qpair failed and we were unable to recover it.
00:30:37.527 [... the three-line sequence above repeats roughly 210 times in total, timestamps 14:16:35.308408 through 14:16:35.368003 (log time 00:30:37.248 through 00:30:37.527); nearly every attempt reports tqpair=0x7f989c000b90, a short run reports tqpair=0xad9a50, and two attempts report tqpair=0x7f9894000b90, all against addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:37.527 [2024-07-15 14:16:35.368294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.527 [2024-07-15 14:16:35.368302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.527 qpair failed and we were unable to recover it. 00:30:37.527 [2024-07-15 14:16:35.368684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.527 [2024-07-15 14:16:35.368692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.527 qpair failed and we were unable to recover it. 00:30:37.527 [2024-07-15 14:16:35.368842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.527 [2024-07-15 14:16:35.368849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.527 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.369027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.369034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.369394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.369401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.369729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.369737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.370063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.370070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.370265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.370272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.370620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.370627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.370938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.370946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 
00:30:37.528 [2024-07-15 14:16:35.371294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.371301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.371646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.371654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.371885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.371894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.372237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.372246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.372566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.372574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.372879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.372886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.373064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.373071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.373276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.373284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.373602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.373609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.373806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.373815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 
00:30:37.528 [2024-07-15 14:16:35.374127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.374135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.374462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.374469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.374810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.374818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.375135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.375144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.375463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.375471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.375814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.375821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.376019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.376027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.376363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.376370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.376686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.376693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.377105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.377113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 
00:30:37.528 [2024-07-15 14:16:35.377485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.377492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.377835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.377843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.378026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.378033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.378219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.528 [2024-07-15 14:16:35.378226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.528 qpair failed and we were unable to recover it. 00:30:37.528 [2024-07-15 14:16:35.378577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.378585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.378907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.378915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.379254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.379262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.379302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.379310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.379622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.379629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.379780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.379788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 
00:30:37.529 [2024-07-15 14:16:35.379977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.379985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.380305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.380313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.380645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.380653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.380850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.380858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.381195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.381202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.381555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.381563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.381612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.381620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.381896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.381904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.382251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.382259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.382615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.382623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 
00:30:37.529 [2024-07-15 14:16:35.382823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.382831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.383158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.383166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.383485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.383493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.383800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.383811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.384136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.384144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.384462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.384469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.384628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.384635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.384816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.384824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.384999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.385006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.385302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.385309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 
00:30:37.529 [2024-07-15 14:16:35.385544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.529 [2024-07-15 14:16:35.385552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.529 qpair failed and we were unable to recover it. 00:30:37.529 [2024-07-15 14:16:35.385740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.385748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.386092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.386100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.386415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.386423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.386769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.386777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.386969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.386977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.387148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.387155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.387517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.387525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.387710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.387718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.388044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.388052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 
00:30:37.530 [2024-07-15 14:16:35.388209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.388217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.388383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.388391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.388693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.388701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.389093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.389102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.389446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.389453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.389650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.389658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.389986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.389994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.390295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.390303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.390600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.390608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.390962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.390970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 
00:30:37.530 [2024-07-15 14:16:35.391126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.391134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.391326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.391333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.391675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.391683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.392006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.392014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.392214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.392223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.392392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.392399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.392742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.392749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.393076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.393084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.393412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.393421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.393728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.393736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 
00:30:37.530 [2024-07-15 14:16:35.394054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.394061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.530 qpair failed and we were unable to recover it. 00:30:37.530 [2024-07-15 14:16:35.394411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.530 [2024-07-15 14:16:35.394420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.394766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.394774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.395083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.395092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.395432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.395440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.395783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.395791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.396100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.396108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.396307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.396315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.396613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.396621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.396816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.396824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 
00:30:37.531 [2024-07-15 14:16:35.397105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.397112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.397402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.397410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.397718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.397725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.398089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.398097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.398450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.398458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.398652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.398660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.398987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.398995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.399200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.399208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.399393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.399400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.399723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.399730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 
00:30:37.531 [2024-07-15 14:16:35.399924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.399932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.400170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.400178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.400486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.400494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.400836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.400844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.401177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.401184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.401514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.401522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.401703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.401711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.402008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.402016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.402362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.402370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.402718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.402726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 
00:30:37.531 [2024-07-15 14:16:35.403085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.403093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.403416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.403423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.403766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.403774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.403974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.531 [2024-07-15 14:16:35.403982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.531 qpair failed and we were unable to recover it. 00:30:37.531 [2024-07-15 14:16:35.404173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.532 [2024-07-15 14:16:35.404180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.532 qpair failed and we were unable to recover it. 00:30:37.532 [2024-07-15 14:16:35.404366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.532 [2024-07-15 14:16:35.404374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.532 qpair failed and we were unable to recover it. 00:30:37.532 [2024-07-15 14:16:35.404669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.532 [2024-07-15 14:16:35.404677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.532 qpair failed and we were unable to recover it. 00:30:37.532 [2024-07-15 14:16:35.404986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.532 [2024-07-15 14:16:35.404994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.532 qpair failed and we were unable to recover it. 00:30:37.532 [2024-07-15 14:16:35.405341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.532 [2024-07-15 14:16:35.405349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.532 qpair failed and we were unable to recover it. 00:30:37.532 [2024-07-15 14:16:35.405732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.532 [2024-07-15 14:16:35.405739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.532 qpair failed and we were unable to recover it. 
00:30:37.532 [2024-07-15 14:16:35.405925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.532 [2024-07-15 14:16:35.405933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.532 qpair failed and we were unable to recover it.
[The same three-line failure sequence repeats with only the timestamps advancing, from 2024-07-15 14:16:35.405 through 14:16:35.463: every reconnect attempt against 10.0.0.2:4420 for tqpair=0x7f989c000b90 fails in posix_sock_create with errno = 111, and the qpair cannot be recovered.]
00:30:37.538 [2024-07-15 14:16:35.463237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.538 [2024-07-15 14:16:35.463244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.538 qpair failed and we were unable to recover it.
00:30:37.538 [2024-07-15 14:16:35.463590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.538 [2024-07-15 14:16:35.463599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.538 qpair failed and we were unable to recover it. 00:30:37.538 [2024-07-15 14:16:35.463944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.538 [2024-07-15 14:16:35.463952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.538 qpair failed and we were unable to recover it. 00:30:37.538 [2024-07-15 14:16:35.464267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.538 [2024-07-15 14:16:35.464275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.538 qpair failed and we were unable to recover it. 00:30:37.538 [2024-07-15 14:16:35.464675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.464683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.465011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.465019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.465433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.465441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.465786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.465794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.465985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.465993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.466333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.466340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.466465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.466472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 
00:30:37.539 [2024-07-15 14:16:35.466747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.466761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.467058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.467066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.467259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.467267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.467437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.467445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.467781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.467789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.468090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.468098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.468322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.468331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.468668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.468675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.468984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.468992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.469313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.469321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 
00:30:37.539 [2024-07-15 14:16:35.469419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.469432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.469724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.469731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.469939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.469948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.470256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.470264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.470570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.470577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.470925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.470932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.471267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.471274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.471465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.471472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.471806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.471814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.472137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.472144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 
00:30:37.539 [2024-07-15 14:16:35.472289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.472296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.472462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.472470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.472768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.472777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.539 [2024-07-15 14:16:35.473090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.539 [2024-07-15 14:16:35.473097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.539 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.473426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.473434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.473622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.473630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.473981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.473989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.474187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.474195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.474521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.474529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.474832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.474840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 
00:30:37.540 [2024-07-15 14:16:35.475006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.475013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.475303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.475310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.475622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.475630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.475977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.475985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.476358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.476366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.476673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.476681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.477022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.477029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.477359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.477367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.477717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.477725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.478056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.478065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 
00:30:37.540 [2024-07-15 14:16:35.478356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.478364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.478710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.478719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.479006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.479013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.479323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.479330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.479684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.479692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.480015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.480023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.480346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.480354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.480657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.480665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.540 [2024-07-15 14:16:35.481007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.540 [2024-07-15 14:16:35.481016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.540 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.481209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.481217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 
00:30:37.541 [2024-07-15 14:16:35.481585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.481594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.481922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.481931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.482143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.482151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.482481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.482488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.482835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.482843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.483024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.483032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.483388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.483396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.483757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.483765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.484078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.484086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.484239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.484246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 
00:30:37.541 [2024-07-15 14:16:35.484554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.484562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.484761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.484769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.484957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.484965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.485236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.485244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.485564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.485571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.485921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.485928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.486169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.486177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.486498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.486506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.486859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.486871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.487214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.487222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 
00:30:37.541 [2024-07-15 14:16:35.487420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.487428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.487618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.487626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.487971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.487979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.488021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.488027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.488314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.488322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.488668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.488675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.489003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.489012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.489419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.489427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.489729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.541 [2024-07-15 14:16:35.489737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.541 qpair failed and we were unable to recover it. 00:30:37.541 [2024-07-15 14:16:35.489937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.489944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 
00:30:37.542 [2024-07-15 14:16:35.490274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.490282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.490622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.490630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.490962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.490970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.491315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.491323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.491674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.491682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.491998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.492006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.492188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.492195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.492510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.492517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.492837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.492845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.493077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.493085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 
00:30:37.542 [2024-07-15 14:16:35.493271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.493281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.493633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.493641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.493989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.493996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.494304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.494312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.494673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.494681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.494881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.494890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.495060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.495067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.495129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.495135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.495516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.495523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.495905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.495913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 
00:30:37.542 [2024-07-15 14:16:35.496257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.496265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.496586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.496594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.496793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.496802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.496974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.496982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.497286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.497294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.497579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.497587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.497924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.497931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.498156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.498164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.498475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.498483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.542 qpair failed and we were unable to recover it. 00:30:37.542 [2024-07-15 14:16:35.498789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.542 [2024-07-15 14:16:35.498797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 
00:30:37.543 [2024-07-15 14:16:35.499094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.499101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.499464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.499471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.499669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.499678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.499994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.500002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.500331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.500338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.500687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.500694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.501014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.501022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.501344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.501352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.501692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.501699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 00:30:37.543 [2024-07-15 14:16:35.502064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.543 [2024-07-15 14:16:35.502072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.543 qpair failed and we were unable to recover it. 
00:30:37.543 [2024-07-15 14:16:35.502258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.543 [2024-07-15 14:16:35.502266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.543 qpair failed and we were unable to recover it.
00:30:37.543 [2024-07-15 14:16:35.502466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.543 [2024-07-15 14:16:35.502473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.543 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt between 14:16:35.502 and 14:16:35.561 ...]
00:30:37.549 [2024-07-15 14:16:35.561188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.549 [2024-07-15 14:16:35.561196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.550 qpair failed and we were unable to recover it.
00:30:37.550 [2024-07-15 14:16:35.561392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.561400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.561701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.561709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.562071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.562079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.562405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.562412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.562757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.562765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.563094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.563102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.563411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.563420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.563621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.563628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.563934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.563942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.564262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.564269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 
00:30:37.550 [2024-07-15 14:16:35.564454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.564471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.564805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.564813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.565009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.565017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.565353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.565361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.565558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.565566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.565912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.565920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.566240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.566247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.566591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.566598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.566639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.566645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.566968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.566977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 
00:30:37.550 [2024-07-15 14:16:35.567308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.567315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.567628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.567636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.567993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.568002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.568241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.568249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.568590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.568598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.568928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.568936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.569126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.569134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.569453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.569463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.569811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.569819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 00:30:37.550 [2024-07-15 14:16:35.570150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.550 [2024-07-15 14:16:35.570158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.550 qpair failed and we were unable to recover it. 
00:30:37.551 [2024-07-15 14:16:35.570313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.570321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.570646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.570654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.570843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.570852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.571163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.571171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.571511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.571519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.571871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.571879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.572248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.572256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.572586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.572594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.572960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.572968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.573169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.573177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 
00:30:37.551 [2024-07-15 14:16:35.573497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.573506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.573830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.573839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.574184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.574192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.574504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.574512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.574861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.574870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.575061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.575069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.575409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.575417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.575762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.575771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.576128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.576135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.576449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.576457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 
00:30:37.551 [2024-07-15 14:16:35.576779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.576787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.577153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.577160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.577340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.577348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.551 qpair failed and we were unable to recover it. 00:30:37.551 [2024-07-15 14:16:35.577684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.551 [2024-07-15 14:16:35.577692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.578009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.578017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.578337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.578345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.578667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.578675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.579028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.579036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.579376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.579383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.579537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.579546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 
00:30:37.552 [2024-07-15 14:16:35.579623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.579631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.579811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.579820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.579998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.580005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.580348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.580358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.580546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.580554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.580895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.580903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.581244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.581252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.581567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.581576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.581772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.581781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.582099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.582107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 
00:30:37.552 [2024-07-15 14:16:35.582459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.582467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.582824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.582832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.583007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.583016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.583213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.583221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.583575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.583583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.583766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.583775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.583950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.583958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.584262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.584269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.584546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.584554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.584882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.584890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 
00:30:37.552 [2024-07-15 14:16:35.585234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.585242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.585427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.585436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.585774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.585782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.586169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.552 [2024-07-15 14:16:35.586177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.552 qpair failed and we were unable to recover it. 00:30:37.552 [2024-07-15 14:16:35.586554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.586562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.586850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.586858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.587203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.587211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.587399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.587407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.587742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.587749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.588064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.588071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 
00:30:37.553 [2024-07-15 14:16:35.588252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.588261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.588597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.588604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.588676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.588682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.588965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.588972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.589319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.589327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.589508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.589516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.589897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.589905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.590231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.590239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.590551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.590559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.590755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.590763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 
00:30:37.553 [2024-07-15 14:16:35.591080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.591088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.591419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.591427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.591780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.591788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.592113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.592120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.592315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.592323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.592647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.592654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.592983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.592992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.593315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.593325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.593515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.593523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.593797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.593805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 
00:30:37.553 [2024-07-15 14:16:35.594191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.594199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.594372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.594380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.594654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.594663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.595010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.595017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.595182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.595190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.553 qpair failed and we were unable to recover it. 00:30:37.553 [2024-07-15 14:16:35.595487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.553 [2024-07-15 14:16:35.595495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.595802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.595810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.596008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.596016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.596363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.596370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.596564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.596572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 
00:30:37.554 [2024-07-15 14:16:35.596617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.596623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.596937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.596945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.597277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.597285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.597506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.597515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.597715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.597723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.598049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.598058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.598230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.598238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.598628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.598636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.598868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.598876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 00:30:37.554 [2024-07-15 14:16:35.599072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.554 [2024-07-15 14:16:35.599081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.554 qpair failed and we were unable to recover it. 
00:30:37.554 [2024-07-15 14:16:35.599398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.554 [2024-07-15 14:16:35.599406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.554 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 14:16:35.599 through 14:16:35.659; every attempt against this tqpair fails identically ...]
00:30:37.829 [2024-07-15 14:16:35.659023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.829 [2024-07-15 14:16:35.659031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420
00:30:37.829 qpair failed and we were unable to recover it.
00:30:37.829 [2024-07-15 14:16:35.659186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.659194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.659512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.659519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.659882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.659890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.660215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.660222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.660371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.660378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.660582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.660590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.660821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.660830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.661136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.661145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.661485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.661493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.661831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.661839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 
00:30:37.829 [2024-07-15 14:16:35.662242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.662250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.662417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.662424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.662722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.662729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.663042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.663050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.663252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.663261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.663583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.663591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.663923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.663931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.664241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.664250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.664433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.664442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.664615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.664624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 
00:30:37.829 [2024-07-15 14:16:35.664804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.664812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.665016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.665024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.665217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.665225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.665567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.665575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.665756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.829 [2024-07-15 14:16:35.665764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.829 qpair failed and we were unable to recover it. 00:30:37.829 [2024-07-15 14:16:35.666119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.666127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.666455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.666463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.666739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.666747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.666935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.666943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.667156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.667163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 
00:30:37.830 [2024-07-15 14:16:35.667346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.667354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.667550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.667558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.667871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.667878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.668100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.668109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.668319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.668327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.668453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.668459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.668696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.668704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.668873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.668881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.669063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.669071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.669418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.669426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 
00:30:37.830 [2024-07-15 14:16:35.669756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.669764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.670119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.670127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.670428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.670436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.670615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.670623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.670953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.670961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.671146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.671153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.671348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.671355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.671723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.671734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.672087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.672095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.672411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.672419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 
00:30:37.830 [2024-07-15 14:16:35.672742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.672754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.672947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.672956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.673197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.673205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.673540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.673549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.673746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.673757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.673947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.673955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.674291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.830 [2024-07-15 14:16:35.674299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.830 qpair failed and we were unable to recover it. 00:30:37.830 [2024-07-15 14:16:35.674639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.674647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.674849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.674858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.675053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.675061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 
00:30:37.831 [2024-07-15 14:16:35.675260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.675268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.675588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.675596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.675782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.675790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.675987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.675995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.676322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.676330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.676713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.676721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.677085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.677093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.677415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.677423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.677735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.677743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.678088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.678095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 
00:30:37.831 [2024-07-15 14:16:35.678444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.678453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.678772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.678780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.679102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.679109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.679308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.679316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.679670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.679677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.679865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.679874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.680138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.680146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.680489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.680497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.680798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.680806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.680999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.681006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 
00:30:37.831 [2024-07-15 14:16:35.681226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.681234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.681598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.681606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.681801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.681809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.681886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.681893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.682045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.682052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.682410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.682417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.682733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.682741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.683076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.683085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.683280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.683288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.683492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.683501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 
00:30:37.831 [2024-07-15 14:16:35.683721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.683729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.683777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.683786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.684096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.684103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.684450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.684459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.684858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.684866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.685083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.685091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.685420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.831 [2024-07-15 14:16:35.685428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.831 qpair failed and we were unable to recover it. 00:30:37.831 [2024-07-15 14:16:35.685749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.832 [2024-07-15 14:16:35.685759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.832 qpair failed and we were unable to recover it. 00:30:37.832 [2024-07-15 14:16:35.686084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.832 [2024-07-15 14:16:35.686092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.832 qpair failed and we were unable to recover it. 
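[Editor's note, not part of the captured log: errno = 111 on Linux is ECONNREFUSED, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 while the NVMe/TCP initiator keeps retrying. A minimal bash sketch of the same kernel-level refusal, assuming a Linux host with bash's /dev/tcp redirection and coreutils timeout; illustrative only, not part of the test suite:

    # Probe the address/port shown in the log; a refused connect exits non-zero,
    # which is the shell-level view of the errno = 111 failures above.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "connect() to 10.0.0.2:4420 was refused or timed out (cf. errno = 111)"
    fi
]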
00:30:37.832 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:37.832 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:30:37.832 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:37.832 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:37.832 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect-failure triplets for tqpair=0x7f989c000b90 (14:16:35.686 to 14:16:35.688) were interleaved with the xtrace lines above in the raw capture; duplicate repetitions elided ...]
[... the connect() failed (errno = 111) / sock connection error (tqpair=0x7f989c000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." triplet continues uninterrupted from 14:16:35.688 to 14:16:35.705; duplicate repetitions elided ...]
00:30:37.833 [2024-07-15 14:16:35.706228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.706235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.833 [2024-07-15 14:16:35.706541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.706548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.833 [2024-07-15 14:16:35.706749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.706766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.833 [2024-07-15 14:16:35.707070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.707076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.833 [2024-07-15 14:16:35.707276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.707283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.833 [2024-07-15 14:16:35.707469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.707477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.833 [2024-07-15 14:16:35.707677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.707685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.833 [2024-07-15 14:16:35.707865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.833 [2024-07-15 14:16:35.707873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.833 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.708177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.708183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.708512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.708519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 
00:30:37.834 [2024-07-15 14:16:35.708713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.708720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.709058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.709065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.709245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.709252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.709540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.709547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.709882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.709889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.710271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.710278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.710455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.710463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.710627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.710634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.710843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.710851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.711237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.711245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 
00:30:37.834 [2024-07-15 14:16:35.711371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.711377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.711566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.711574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.711748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.711761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.712091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.712099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.712507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.712514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.712843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.712864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.713051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.713059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.713337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.713344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.713649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.713655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.713979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.713986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 
00:30:37.834 [2024-07-15 14:16:35.714328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.714335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.714662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.714669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.714838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.714844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.715131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.715138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.715457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.715464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.715815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.715823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.716158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.716165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.716363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.716371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.716582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.716588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.716772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.716780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 
00:30:37.834 [2024-07-15 14:16:35.717164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.717171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.717563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.717570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.717769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.717777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.717967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.717973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.718276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.718282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.718478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.718486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.718715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.718722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.834 [2024-07-15 14:16:35.718918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.834 [2024-07-15 14:16:35.718925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.834 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.719156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.719163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.719352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.719359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 
00:30:37.835 [2024-07-15 14:16:35.719704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.719711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.719888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.719898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.720071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.720078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.720283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.720291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.720575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.720581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.720755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.720762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.721001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.721008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.721203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.721211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.721556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.721564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.721876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.721884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 
00:30:37.835 [2024-07-15 14:16:35.722201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.722207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.722543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.722550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.722714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.722720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.723106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.723113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.723429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.723435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.723639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.723647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.723688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.723694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.835 [2024-07-15 14:16:35.724012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.724021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 00:30:37.835 [2024-07-15 14:16:35.724262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.835 [2024-07-15 14:16:35.724269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f989c000b90 with addr=10.0.0.2, port=4420 00:30:37.835 qpair failed and we were unable to recover it. 
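errno = 111 is ECONNREFUSED on Linux: the host side keeps dialing 10.0.0.2:4420 while no NVMe/TCP listener is up on the target, so every connect() is refused and the qpair cannot recover. A minimal bash sketch of the same probe (the /dev/tcp redirection is a bash convenience, not part of the test scripts):

    # hedged sketch: probe the listener the host qpairs above keep retrying
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo 'connect() to 10.0.0.2:4420 refused (errno 111, ECONNREFUSED)'
    fi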
00:30:37.835 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:37.835 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:37.835 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retries for tqpair=0x7f989c000b90 continue while the RPC runs, 14:16:35.724 through 14:16:35.726, trimmed ...]
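rpc_cmd in SPDK's test harness forwards to scripts/rpc.py against the running target, so the trace above amounts to creating the RAM-backed bdev the test will export. A hedged equivalent, assuming a default SPDK checkout and the default RPC socket:

    # hedged equivalent of the traced rpc_cmd: a malloc bdev named Malloc0,
    # 64 MB total with a 512-byte block size (arguments exactly as traced)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0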
[... connect() failed (errno = 111) retries for tqpair=0x7f989c000b90 continue through 14:16:35.729, trimmed ...]
00:30:37.836 [2024-07-15 14:16:35.729643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.836 [2024-07-15 14:16:35.729681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.836 qpair failed and we were unable to recover it.
[... from here the retries target tqpair=0xad9a50; the same connect() failed (errno = 111) / qpair failed triplet repeats, 14:16:35.730 through 14:16:35.738, trimmed ...]
[... connect() failed (errno = 111) retries for tqpair=0xad9a50 continue, 14:16:35.738 through 14:16:35.740, trimmed ...]
00:30:37.837 Malloc0
[... one further connect() failed (errno = 111) retry for tqpair=0xad9a50 at 14:16:35.740, trimmed ...]
00:30:37.837 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:37.837 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:37.837 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:37.837 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retries for tqpair=0xad9a50 continue throughout, 14:16:35.741 through 14:16:35.745, trimmed ...]
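host/target_disconnect.sh@21 then brings the target side back up: nvmf_create_transport initializes the NVMe-oF TCP transport, which the *** TCP Transport Init *** notice just below acknowledges. A hedged equivalent of the traced call, flags copied verbatim and the default rpc.py socket assumed:

    # hedged equivalent of the traced rpc_cmd: initialize the TCP transport;
    # once a listener is added on 10.0.0.2:4420, the retries above can succeed
    ./scripts/rpc.py nvmf_create_transport -t tcp -o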
00:30:37.837 [2024-07-15 14:16:35.746265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.746274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.746595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.746605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.746951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.746961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.747273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.747283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.747610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.747619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.747737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.747746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.747974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.747984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.748268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.748278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.748303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.837 [2024-07-15 14:16:35.748596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.748605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.748917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.748928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 
00:30:37.837 [2024-07-15 14:16:35.749115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.749129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.749485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.837 [2024-07-15 14:16:35.749495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.837 qpair failed and we were unable to recover it. 00:30:37.837 [2024-07-15 14:16:35.749810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.749820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.750142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.750153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.750478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.750488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.750688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.750698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.751019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.751029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.751355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.751367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.751563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.751574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.751769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.751782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 
00:30:37.838 [2024-07-15 14:16:35.752092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.752102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.752337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.752348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.752676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.752686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.752987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.752997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.753069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.753078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.753378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.753388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.753716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.753726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.754013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.754024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.754325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.754336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 00:30:37.838 [2024-07-15 14:16:35.754657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.838 [2024-07-15 14:16:35.754669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420 00:30:37.838 qpair failed and we were unable to recover it. 
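[annotation: errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux. Nothing is listening on 10.0.0.2:4420 at this point; the rpc_cmd calls traced in between are still bringing the target back up, so every TCP dial from the initiator is refused and the qpair retry loop spins. A minimal sketch of the same probe by hand, assuming bash and its /dev/tcp pseudo-device (a bash-ism, not SPDK tooling):]

# try to connect the way the initiator's connect() does; with no listener the
# redirection fails with "Connection refused", i.e. errno 111 / ECONNREFUSED
if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "no NVMe/TCP listener on 10.0.0.2:4420 yet"
fi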
00:30:37.838 [2024-07-15 14:16:35.754999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.838 [2024-07-15 14:16:35.755010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:37.838 qpair failed and we were unable to recover it.
00:30:37.838 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:37.838 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:37.838 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:37.838 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same connect()/qpair-failure triplet repeats from 14:16:35.755 through 14:16:35.764; duplicate entries collapsed ...]
00:30:37.839 [2024-07-15 14:16:35.764664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.839 [2024-07-15 14:16:35.764674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9a50 with addr=10.0.0.2, port=4420
00:30:37.839 qpair failed and we were unable to recover it.
00:30:37.839 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:37.839 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:37.839 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:37.839 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect()/qpair-failure triplet keeps repeating around these steps, from 14:16:35.764 through 14:16:35.775; duplicate entries collapsed ...]
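[annotation: stripped of the xtrace noise, the target bring-up traced above is four RPCs. A sketch of the same sequence issued directly with SPDK's scripts/rpc.py (rpc_cmd in the autotest scripts forwards to rpc.py; the flags are copied verbatim from the trace, and the Malloc0 bdev is assumed to have been created earlier in the test):]

./scripts/rpc.py nvmf_create_transport -t tcp -o                                                    # create the TCP transport
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001          # -a: allow any host, -s: serial number
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                           # back the subsystem with the Malloc0 bdev
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # open the 10.0.0.2:4420 listener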
[... three more connect()/qpair-failure triplets at 14:16:35.775-35.776; duplicate entries collapsed ...]
00:30:37.840 [2024-07-15 14:16:35.776519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:37.840 [2024-07-15 14:16:35.778936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.840 [2024-07-15 14:16:35.779015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.840 [2024-07-15 14:16:35.779033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.840 [2024-07-15 14:16:35.779041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.840 [2024-07-15 14:16:35.779048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:37.840 [2024-07-15 14:16:35.779067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:37.840 qpair failed and we were unable to recover it.
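[annotation: the failure mode changes here. Once the nvmf_tcp_listen notice appears, the TCP connect succeeds, but the Fabrics CONNECT command on the I/O qpair is rejected. sct 1 is the command-specific status code type, and sc 130 is 0x82, which for CONNECT is the Connect Invalid Parameters status in the NVMe-oF spec; the target-side "Unknown controller ID 0x1" line gives the likely reason: the I/O qpair's CONNECT still names cntlid 0x1, a controller the re-created target does not know. The decimal-to-hex reading of the status, done by hand:]

printf 'sct=%d sc=0x%02x\n' 1 130    # -> sct=1 sc=0x82 (Fabrics CONNECT: Invalid Parameters)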
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:37.840 [2024-07-15 14:16:35.788820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.840 [2024-07-15 14:16:35.788885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.840 [2024-07-15 14:16:35.788901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.840 [2024-07-15 14:16:35.788908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.840 [2024-07-15 14:16:35.788914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:37.840 [2024-07-15 14:16:35.788929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:37.840 qpair failed and we were unable to recover it.
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:37.840 14:16:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1573383
00:30:37.840 [2024-07-15 14:16:35.808759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.840 [2024-07-15 14:16:35.808824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.840 [2024-07-15 14:16:35.808841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.840 [2024-07-15 14:16:35.808848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.840 [2024-07-15 14:16:35.808855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.840 [2024-07-15 14:16:35.808869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.840 qpair failed and we were unable to recover it. 00:30:37.840 [2024-07-15 14:16:35.818888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.840 [2024-07-15 14:16:35.818953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.840 [2024-07-15 14:16:35.818968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.840 [2024-07-15 14:16:35.818979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.840 [2024-07-15 14:16:35.818985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.840 [2024-07-15 14:16:35.818999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.840 qpair failed and we were unable to recover it. 00:30:37.840 [2024-07-15 14:16:35.828803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.840 [2024-07-15 14:16:35.828907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.840 [2024-07-15 14:16:35.828923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.840 [2024-07-15 14:16:35.828930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.840 [2024-07-15 14:16:35.828937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.840 [2024-07-15 14:16:35.828951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.840 qpair failed and we were unable to recover it. 
00:30:37.840 [2024-07-15 14:16:35.838929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.840 [2024-07-15 14:16:35.838987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.840 [2024-07-15 14:16:35.839002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.840 [2024-07-15 14:16:35.839009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.840 [2024-07-15 14:16:35.839015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.840 [2024-07-15 14:16:35.839029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.840 qpair failed and we were unable to recover it. 00:30:37.840 [2024-07-15 14:16:35.848952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.840 [2024-07-15 14:16:35.849013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.840 [2024-07-15 14:16:35.849028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.840 [2024-07-15 14:16:35.849035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.840 [2024-07-15 14:16:35.849041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.840 [2024-07-15 14:16:35.849055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.840 qpair failed and we were unable to recover it. 00:30:37.840 [2024-07-15 14:16:35.858889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.840 [2024-07-15 14:16:35.858959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.858973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.858981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.858987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.859001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 
00:30:37.841 [2024-07-15 14:16:35.868998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.841 [2024-07-15 14:16:35.869056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.869071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.869078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.869085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.869098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 00:30:37.841 [2024-07-15 14:16:35.878903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.841 [2024-07-15 14:16:35.878959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.878974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.878981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.878987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.879001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 00:30:37.841 [2024-07-15 14:16:35.889067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.841 [2024-07-15 14:16:35.889168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.889183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.889191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.889197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.889210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 
00:30:37.841 [2024-07-15 14:16:35.899102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.841 [2024-07-15 14:16:35.899167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.899181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.899189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.899195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.899208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 00:30:37.841 [2024-07-15 14:16:35.909185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.841 [2024-07-15 14:16:35.909247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.909262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.909273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.909279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.909294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 00:30:37.841 [2024-07-15 14:16:35.919054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.841 [2024-07-15 14:16:35.919120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.919135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.919142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.919148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.919162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 
00:30:37.841 [2024-07-15 14:16:35.929139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.841 [2024-07-15 14:16:35.929193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.841 [2024-07-15 14:16:35.929208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.841 [2024-07-15 14:16:35.929216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.841 [2024-07-15 14:16:35.929222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:37.841 [2024-07-15 14:16:35.929236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.841 qpair failed and we were unable to recover it. 00:30:38.103 [2024-07-15 14:16:35.939202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.103 [2024-07-15 14:16:35.939267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.103 [2024-07-15 14:16:35.939282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.103 [2024-07-15 14:16:35.939289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.103 [2024-07-15 14:16:35.939296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.103 [2024-07-15 14:16:35.939309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.103 qpair failed and we were unable to recover it. 00:30:38.103 [2024-07-15 14:16:35.949111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.103 [2024-07-15 14:16:35.949217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.103 [2024-07-15 14:16:35.949233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.103 [2024-07-15 14:16:35.949240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.103 [2024-07-15 14:16:35.949247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.103 [2024-07-15 14:16:35.949260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.103 qpair failed and we were unable to recover it. 
00:30:38.103 [2024-07-15 14:16:35.959263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.103 [2024-07-15 14:16:35.959359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.103 [2024-07-15 14:16:35.959375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.103 [2024-07-15 14:16:35.959382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.103 [2024-07-15 14:16:35.959389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.103 [2024-07-15 14:16:35.959403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.103 qpair failed and we were unable to recover it. 00:30:38.103 [2024-07-15 14:16:35.969266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.103 [2024-07-15 14:16:35.969330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.103 [2024-07-15 14:16:35.969345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.103 [2024-07-15 14:16:35.969353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.103 [2024-07-15 14:16:35.969359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.103 [2024-07-15 14:16:35.969373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.103 qpair failed and we were unable to recover it. 00:30:38.103 [2024-07-15 14:16:35.979323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.103 [2024-07-15 14:16:35.979388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.103 [2024-07-15 14:16:35.979403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.103 [2024-07-15 14:16:35.979411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.103 [2024-07-15 14:16:35.979417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.103 [2024-07-15 14:16:35.979431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.103 qpair failed and we were unable to recover it. 
00:30:38.103 [2024-07-15 14:16:35.989354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.103 [2024-07-15 14:16:35.989451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.103 [2024-07-15 14:16:35.989467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.103 [2024-07-15 14:16:35.989474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.103 [2024-07-15 14:16:35.989481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.103 [2024-07-15 14:16:35.989495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.103 qpair failed and we were unable to recover it.
00:30:38.103 [2024-07-15 14:16:35.999370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.103 [2024-07-15 14:16:35.999428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.103 [2024-07-15 14:16:35.999442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.103 [2024-07-15 14:16:35.999453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.103 [2024-07-15 14:16:35.999459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.103 [2024-07-15 14:16:35.999473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.103 qpair failed and we were unable to recover it.
00:30:38.103 [2024-07-15 14:16:36.009394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.103 [2024-07-15 14:16:36.009452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.103 [2024-07-15 14:16:36.009467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.103 [2024-07-15 14:16:36.009475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.103 [2024-07-15 14:16:36.009481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.103 [2024-07-15 14:16:36.009495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.103 qpair failed and we were unable to recover it.
00:30:38.103 [2024-07-15 14:16:36.019327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.103 [2024-07-15 14:16:36.019389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.103 [2024-07-15 14:16:36.019404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.103 [2024-07-15 14:16:36.019411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.103 [2024-07-15 14:16:36.019418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.103 [2024-07-15 14:16:36.019431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.103 qpair failed and we were unable to recover it.
00:30:38.103 [2024-07-15 14:16:36.029462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.103 [2024-07-15 14:16:36.029516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.103 [2024-07-15 14:16:36.029530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.103 [2024-07-15 14:16:36.029537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.103 [2024-07-15 14:16:36.029544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.103 [2024-07-15 14:16:36.029558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.103 qpair failed and we were unable to recover it.
00:30:38.103 [2024-07-15 14:16:36.039497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.103 [2024-07-15 14:16:36.039557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.103 [2024-07-15 14:16:36.039582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.103 [2024-07-15 14:16:36.039590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.103 [2024-07-15 14:16:36.039597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.103 [2024-07-15 14:16:36.039617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.103 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.049488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.049546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.049563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.049570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.049577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.049592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.059536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.059594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.059610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.059617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.059623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.059637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.069549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.069604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.069620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.069627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.069633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.069647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.079592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.079648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.079663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.079671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.079677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.079691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.089593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.089681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.089699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.089707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.089714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.089728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.099687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.099794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.099810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.099818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.099825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.099839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.109723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.109776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.109792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.109799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.109805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.109819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.119700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.119757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.119772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.119779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.119786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.119800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.129721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.129782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.129797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.129804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.129811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.129824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.139755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.139813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.139828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.139835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.139842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.139855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.149656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.149728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.149743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.149754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.149761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.149775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.159699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.159764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.159780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.159788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.159794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.159808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.169848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.169903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.169918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.169925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.169932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.169946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.179909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.179986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.104 [2024-07-15 14:16:36.180004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.104 [2024-07-15 14:16:36.180011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.104 [2024-07-15 14:16:36.180018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.104 [2024-07-15 14:16:36.180032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.104 qpair failed and we were unable to recover it.
00:30:38.104 [2024-07-15 14:16:36.189896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.104 [2024-07-15 14:16:36.189958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.105 [2024-07-15 14:16:36.189973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.105 [2024-07-15 14:16:36.189980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.105 [2024-07-15 14:16:36.189987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.105 [2024-07-15 14:16:36.190000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.105 qpair failed and we were unable to recover it.
00:30:38.105 [2024-07-15 14:16:36.199830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.105 [2024-07-15 14:16:36.199937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.105 [2024-07-15 14:16:36.199952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.105 [2024-07-15 14:16:36.199960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.105 [2024-07-15 14:16:36.199966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.105 [2024-07-15 14:16:36.199980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.105 qpair failed and we were unable to recover it.
00:30:38.105 [2024-07-15 14:16:36.209942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.105 [2024-07-15 14:16:36.209998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.105 [2024-07-15 14:16:36.210013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.105 [2024-07-15 14:16:36.210020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.105 [2024-07-15 14:16:36.210027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.105 [2024-07-15 14:16:36.210040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.105 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.219978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.220038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.220053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.366 [2024-07-15 14:16:36.220060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.366 [2024-07-15 14:16:36.220067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.366 [2024-07-15 14:16:36.220084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.366 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.229990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.230043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.230057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.366 [2024-07-15 14:16:36.230065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.366 [2024-07-15 14:16:36.230071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.366 [2024-07-15 14:16:36.230084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.366 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.240066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.240127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.240142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.366 [2024-07-15 14:16:36.240149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.366 [2024-07-15 14:16:36.240156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.366 [2024-07-15 14:16:36.240170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.366 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.250058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.250116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.250131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.366 [2024-07-15 14:16:36.250139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.366 [2024-07-15 14:16:36.250145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.366 [2024-07-15 14:16:36.250159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.366 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.260088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.260151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.260166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.366 [2024-07-15 14:16:36.260173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.366 [2024-07-15 14:16:36.260179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.366 [2024-07-15 14:16:36.260193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.366 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.270090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.270173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.270192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.366 [2024-07-15 14:16:36.270199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.366 [2024-07-15 14:16:36.270205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.366 [2024-07-15 14:16:36.270219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.366 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.280258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.280334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.280349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.366 [2024-07-15 14:16:36.280356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.366 [2024-07-15 14:16:36.280362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.366 [2024-07-15 14:16:36.280376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.366 qpair failed and we were unable to recover it.
00:30:38.366 [2024-07-15 14:16:36.290156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.366 [2024-07-15 14:16:36.290258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.366 [2024-07-15 14:16:36.290274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.290282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.290288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.290302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.300074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.300144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.300159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.300166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.300172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.300186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.310194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.310250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.310265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.310272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.310279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.310296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.320235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.320301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.320317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.320324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.320330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.320344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.330271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.330365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.330381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.330388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.330395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.330408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.340298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.340364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.340379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.340387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.340393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.340407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.350318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.350374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.350389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.350396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.350403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.350418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.360338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.360393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.360412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.360419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.360425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.360439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.370393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.370450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.370465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.370472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.370479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.370492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.380378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.380449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.380473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.380482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.380489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.380508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.390430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.390489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.390514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.390523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.390529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.390548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.400444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.400498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.400515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.400523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.400529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.400549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.410385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.410478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.410494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.410502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.410508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.410522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.420533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.420603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.420628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.420637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.420644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.420663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.367 [2024-07-15 14:16:36.430449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.367 [2024-07-15 14:16:36.430506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.367 [2024-07-15 14:16:36.430523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.367 [2024-07-15 14:16:36.430531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.367 [2024-07-15 14:16:36.430537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.367 [2024-07-15 14:16:36.430552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.367 qpair failed and we were unable to recover it.
00:30:38.368 [2024-07-15 14:16:36.440586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.368 [2024-07-15 14:16:36.440646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.368 [2024-07-15 14:16:36.440663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.368 [2024-07-15 14:16:36.440670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.368 [2024-07-15 14:16:36.440677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.368 [2024-07-15 14:16:36.440692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.368 qpair failed and we were unable to recover it.
00:30:38.368 [2024-07-15 14:16:36.450611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.368 [2024-07-15 14:16:36.450667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.368 [2024-07-15 14:16:36.450686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.368 [2024-07-15 14:16:36.450693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.368 [2024-07-15 14:16:36.450700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.368 [2024-07-15 14:16:36.450714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.368 qpair failed and we were unable to recover it.
00:30:38.368 [2024-07-15 14:16:36.460634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.368 [2024-07-15 14:16:36.460698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.368 [2024-07-15 14:16:36.460713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.368 [2024-07-15 14:16:36.460721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.368 [2024-07-15 14:16:36.460727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.368 [2024-07-15 14:16:36.460741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.368 qpair failed and we were unable to recover it.
00:30:38.368 [2024-07-15 14:16:36.470661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.368 [2024-07-15 14:16:36.470719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.368 [2024-07-15 14:16:36.470734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.368 [2024-07-15 14:16:36.470742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.368 [2024-07-15 14:16:36.470748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.368 [2024-07-15 14:16:36.470765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.368 qpair failed and we were unable to recover it.
00:30:38.629 [2024-07-15 14:16:36.480694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.629 [2024-07-15 14:16:36.480750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.629 [2024-07-15 14:16:36.480769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.629 [2024-07-15 14:16:36.480776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.629 [2024-07-15 14:16:36.480782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.629 [2024-07-15 14:16:36.480797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.629 qpair failed and we were unable to recover it.
00:30:38.629 [2024-07-15 14:16:36.490733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.630 [2024-07-15 14:16:36.490798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.630 [2024-07-15 14:16:36.490814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.630 [2024-07-15 14:16:36.490821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.630 [2024-07-15 14:16:36.490831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.630 [2024-07-15 14:16:36.490845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.630 qpair failed and we were unable to recover it.
00:30:38.630 [2024-07-15 14:16:36.500764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.630 [2024-07-15 14:16:36.500826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.630 [2024-07-15 14:16:36.500841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.630 [2024-07-15 14:16:36.500848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.630 [2024-07-15 14:16:36.500854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.630 [2024-07-15 14:16:36.500868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.630 qpair failed and we were unable to recover it.
00:30:38.630 [2024-07-15 14:16:36.510786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.630 [2024-07-15 14:16:36.510846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.630 [2024-07-15 14:16:36.510861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.630 [2024-07-15 14:16:36.510868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.630 [2024-07-15 14:16:36.510874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.630 [2024-07-15 14:16:36.510888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.630 qpair failed and we were unable to recover it.
00:30:38.630 [2024-07-15 14:16:36.520812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.630 [2024-07-15 14:16:36.520868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.630 [2024-07-15 14:16:36.520883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.630 [2024-07-15 14:16:36.520890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.630 [2024-07-15 14:16:36.520896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.630 [2024-07-15 14:16:36.520910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.630 qpair failed and we were unable to recover it.
00:30:38.630 [2024-07-15 14:16:36.530838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.630 [2024-07-15 14:16:36.531025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.630 [2024-07-15 14:16:36.531041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.630 [2024-07-15 14:16:36.531048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.630 [2024-07-15 14:16:36.531054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.630 [2024-07-15 14:16:36.531068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.630 qpair failed and we were unable to recover it.
00:30:38.630 [2024-07-15 14:16:36.540864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.630 [2024-07-15 14:16:36.540929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.630 [2024-07-15 14:16:36.540944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.630 [2024-07-15 14:16:36.540952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.630 [2024-07-15 14:16:36.540958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.630 [2024-07-15 14:16:36.540972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.630 qpair failed and we were unable to recover it.
00:30:38.630 [2024-07-15 14:16:36.550907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.630 [2024-07-15 14:16:36.550963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.630 [2024-07-15 14:16:36.550978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.630 [2024-07-15 14:16:36.550985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.630 [2024-07-15 14:16:36.550991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:38.630 [2024-07-15 14:16:36.551005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:38.630 qpair failed and we were unable to recover it.
00:30:38.630 [2024-07-15 14:16:36.560899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.560953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.560969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.630 [2024-07-15 14:16:36.560976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.630 [2024-07-15 14:16:36.560983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.630 [2024-07-15 14:16:36.560997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.630 qpair failed and we were unable to recover it. 00:30:38.630 [2024-07-15 14:16:36.570976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.571052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.571067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.630 [2024-07-15 14:16:36.571074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.630 [2024-07-15 14:16:36.571081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.630 [2024-07-15 14:16:36.571095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.630 qpair failed and we were unable to recover it. 00:30:38.630 [2024-07-15 14:16:36.580860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.580964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.580979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.630 [2024-07-15 14:16:36.580987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.630 [2024-07-15 14:16:36.580997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.630 [2024-07-15 14:16:36.581011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.630 qpair failed and we were unable to recover it. 
00:30:38.630 [2024-07-15 14:16:36.590982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.591037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.591052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.630 [2024-07-15 14:16:36.591059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.630 [2024-07-15 14:16:36.591066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.630 [2024-07-15 14:16:36.591080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.630 qpair failed and we were unable to recover it. 00:30:38.630 [2024-07-15 14:16:36.600891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.600951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.600966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.630 [2024-07-15 14:16:36.600973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.630 [2024-07-15 14:16:36.600979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.630 [2024-07-15 14:16:36.600993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.630 qpair failed and we were unable to recover it. 00:30:38.630 [2024-07-15 14:16:36.610938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.610998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.611014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.630 [2024-07-15 14:16:36.611021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.630 [2024-07-15 14:16:36.611028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.630 [2024-07-15 14:16:36.611042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.630 qpair failed and we were unable to recover it. 
00:30:38.630 [2024-07-15 14:16:36.621073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.621139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.621154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.630 [2024-07-15 14:16:36.621161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.630 [2024-07-15 14:16:36.621167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.630 [2024-07-15 14:16:36.621181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.630 qpair failed and we were unable to recover it. 00:30:38.630 [2024-07-15 14:16:36.631074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.630 [2024-07-15 14:16:36.631146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.630 [2024-07-15 14:16:36.631161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.631169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.631175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.631190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.631 [2024-07-15 14:16:36.641127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.641182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.641197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.641204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.641211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.641225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 
00:30:38.631 [2024-07-15 14:16:36.651151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.651213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.651228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.651235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.651242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.651255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.631 [2024-07-15 14:16:36.661184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.661253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.661267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.661274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.661280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.661294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.631 [2024-07-15 14:16:36.671201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.671257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.671272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.671279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.671289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.671302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 
00:30:38.631 [2024-07-15 14:16:36.681242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.681342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.681356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.681364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.681371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.681384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.631 [2024-07-15 14:16:36.691322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.691431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.691447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.691454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.691460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.691474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.631 [2024-07-15 14:16:36.701289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.701351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.701366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.701373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.701380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.701394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 
00:30:38.631 [2024-07-15 14:16:36.711328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.711392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.711407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.711415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.711421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.711435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.631 [2024-07-15 14:16:36.721405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.721479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.721495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.721502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.721508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.721523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.631 [2024-07-15 14:16:36.731304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.731365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.731380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.731387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.731394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.731408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 
00:30:38.631 [2024-07-15 14:16:36.741399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.631 [2024-07-15 14:16:36.741465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.631 [2024-07-15 14:16:36.741480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.631 [2024-07-15 14:16:36.741487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.631 [2024-07-15 14:16:36.741494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.631 [2024-07-15 14:16:36.741507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.631 qpair failed and we were unable to recover it. 00:30:38.893 [2024-07-15 14:16:36.751427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.893 [2024-07-15 14:16:36.751488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.893 [2024-07-15 14:16:36.751513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.893 [2024-07-15 14:16:36.751522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.893 [2024-07-15 14:16:36.751529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.893 [2024-07-15 14:16:36.751549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.893 qpair failed and we were unable to recover it. 00:30:38.893 [2024-07-15 14:16:36.761453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.893 [2024-07-15 14:16:36.761521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.893 [2024-07-15 14:16:36.761547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.893 [2024-07-15 14:16:36.761561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.893 [2024-07-15 14:16:36.761568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.893 [2024-07-15 14:16:36.761586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.893 qpair failed and we were unable to recover it. 
00:30:38.893 [2024-07-15 14:16:36.771489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.893 [2024-07-15 14:16:36.771553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.893 [2024-07-15 14:16:36.771577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.893 [2024-07-15 14:16:36.771586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.893 [2024-07-15 14:16:36.771593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.893 [2024-07-15 14:16:36.771612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.893 qpair failed and we were unable to recover it. 00:30:38.893 [2024-07-15 14:16:36.781556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.893 [2024-07-15 14:16:36.781618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.893 [2024-07-15 14:16:36.781635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.893 [2024-07-15 14:16:36.781643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.893 [2024-07-15 14:16:36.781649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.893 [2024-07-15 14:16:36.781664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.893 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.791622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.791709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.791725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.791732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.791739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.791758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 
00:30:38.894 [2024-07-15 14:16:36.801451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.801508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.801525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.801532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.801539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.801552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.811626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.811685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.811701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.811708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.811715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.811729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.821630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.821722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.821738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.821746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.821756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.821771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 
00:30:38.894 [2024-07-15 14:16:36.831634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.831725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.831740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.831748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.831758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.831772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.841672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.841728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.841743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.841755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.841762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.841776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.851709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.851773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.851788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.851799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.851805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.851819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 
00:30:38.894 [2024-07-15 14:16:36.861691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.861798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.861814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.861822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.861828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.861842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.871746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.871805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.871820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.871827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.871834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.871848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.881769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.881872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.881888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.881895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.881901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.881915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 
00:30:38.894 [2024-07-15 14:16:36.891789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.891847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.891861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.891868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.891875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.891889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.901844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.901911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.901927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.901935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.901944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.901959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.911757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.911818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.911835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.911842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.911849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.911863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 
00:30:38.894 [2024-07-15 14:16:36.921891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.921944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.894 [2024-07-15 14:16:36.921959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.894 [2024-07-15 14:16:36.921966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.894 [2024-07-15 14:16:36.921972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.894 [2024-07-15 14:16:36.921986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.894 qpair failed and we were unable to recover it. 00:30:38.894 [2024-07-15 14:16:36.931938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.894 [2024-07-15 14:16:36.931997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:36.932012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:36.932020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:36.932026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:36.932040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 00:30:38.895 [2024-07-15 14:16:36.941846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.895 [2024-07-15 14:16:36.941907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:36.941922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:36.941933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:36.941940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:36.941953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 
00:30:38.895 [2024-07-15 14:16:36.951980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.895 [2024-07-15 14:16:36.952087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:36.952102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:36.952110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:36.952117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:36.952130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 00:30:38.895 [2024-07-15 14:16:36.962057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.895 [2024-07-15 14:16:36.962114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:36.962129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:36.962137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:36.962143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:36.962157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 00:30:38.895 [2024-07-15 14:16:36.972027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.895 [2024-07-15 14:16:36.972085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:36.972100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:36.972107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:36.972114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:36.972128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 
00:30:38.895 [2024-07-15 14:16:36.982096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.895 [2024-07-15 14:16:36.982156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:36.982171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:36.982178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:36.982185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:36.982198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 00:30:38.895 [2024-07-15 14:16:36.992081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.895 [2024-07-15 14:16:36.992137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:36.992152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:36.992160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:36.992166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:36.992180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 00:30:38.895 [2024-07-15 14:16:37.002151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.895 [2024-07-15 14:16:37.002205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.895 [2024-07-15 14:16:37.002220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.895 [2024-07-15 14:16:37.002227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.895 [2024-07-15 14:16:37.002234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:38.895 [2024-07-15 14:16:37.002247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:38.895 qpair failed and we were unable to recover it. 
00:30:39.157 [2024-07-15 14:16:37.012190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.157 [2024-07-15 14:16:37.012287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.157 [2024-07-15 14:16:37.012303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.157 [2024-07-15 14:16:37.012310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.157 [2024-07-15 14:16:37.012317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.157 [2024-07-15 14:16:37.012331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-07-15 14:16:37.022195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.157 [2024-07-15 14:16:37.022254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.022269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.022277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.022284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.022297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-07-15 14:16:37.032277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.032347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.032365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.032372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.032379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.032394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 
00:30:39.158 [2024-07-15 14:16:37.042230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.042291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.042306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.042313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.042320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.042333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-07-15 14:16:37.052271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.052333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.052348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.052356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.052362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.052376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-07-15 14:16:37.062346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.062413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.062428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.062435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.062442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.062456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 
00:30:39.158 [2024-07-15 14:16:37.072221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.072324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.072340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.072348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.072354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.072368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-07-15 14:16:37.082377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.082433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.082449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.082456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.082463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.082476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-07-15 14:16:37.092393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.092472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.092486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.092494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.092502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.092516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 
00:30:39.158 [2024-07-15 14:16:37.102418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.102485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.102500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.102507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.102513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.102527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-07-15 14:16:37.112338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.112442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.112457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.112465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.112471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.112485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-07-15 14:16:37.122357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.158 [2024-07-15 14:16:37.122459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.158 [2024-07-15 14:16:37.122477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.158 [2024-07-15 14:16:37.122485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.158 [2024-07-15 14:16:37.122492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.158 [2024-07-15 14:16:37.122505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.158 qpair failed and we were unable to recover it. 
00:30:39.158 [2024-07-15 14:16:37.132495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.158 [2024-07-15 14:16:37.132564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.158 [2024-07-15 14:16:37.132589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.158 [2024-07-15 14:16:37.132597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.158 [2024-07-15 14:16:37.132605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.158 [2024-07-15 14:16:37.132623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.158 qpair failed and we were unable to recover it.
00:30:39.158 [2024-07-15 14:16:37.142535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.158 [2024-07-15 14:16:37.142596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.158 [2024-07-15 14:16:37.142613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.158 [2024-07-15 14:16:37.142620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.158 [2024-07-15 14:16:37.142627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.158 [2024-07-15 14:16:37.142641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.158 qpair failed and we were unable to recover it.
00:30:39.158 [2024-07-15 14:16:37.152562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.158 [2024-07-15 14:16:37.152615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.158 [2024-07-15 14:16:37.152631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.158 [2024-07-15 14:16:37.152638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.158 [2024-07-15 14:16:37.152644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.158 [2024-07-15 14:16:37.152658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.158 qpair failed and we were unable to recover it.
00:30:39.158 [2024-07-15 14:16:37.162591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.158 [2024-07-15 14:16:37.162671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.158 [2024-07-15 14:16:37.162686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.158 [2024-07-15 14:16:37.162694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.158 [2024-07-15 14:16:37.162701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.162719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.172514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.172615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.172631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.172638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.172644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.172658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.182705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.182769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.182785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.182792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.182798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.182813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.192676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.192741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.192761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.192769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.192775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.192790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.202679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.202785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.202801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.202808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.202815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.202828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.212734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.212837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.212856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.212864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.212871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.212885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.222790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.222899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.222915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.222923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.222929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.222943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.232764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.232818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.232833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.232840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.232847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.232860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.242810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.242863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.242878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.242885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.242892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.242905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.252868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.252965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.252980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.252988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.252994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.253011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.159 [2024-07-15 14:16:37.262849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.159 [2024-07-15 14:16:37.262923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.159 [2024-07-15 14:16:37.262938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.159 [2024-07-15 14:16:37.262945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.159 [2024-07-15 14:16:37.262951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.159 [2024-07-15 14:16:37.262965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.159 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.272891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.272946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.272960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.272967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.272974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.272988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.282925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.282983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.282998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.283006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.283012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.283026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.292990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.293048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.293063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.293070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.293076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.293090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.302993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.303057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.303075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.303083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.303089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.303102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.313024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.313109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.313124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.313131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.313137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.313151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.323041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.323100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.323115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.323122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.323129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.323142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.333093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.333151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.333166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.333173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.333179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.333193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.343102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.343160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.343175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.343182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.343189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.422 [2024-07-15 14:16:37.343206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.422 qpair failed and we were unable to recover it.
00:30:39.422 [2024-07-15 14:16:37.353125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.422 [2024-07-15 14:16:37.353178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.422 [2024-07-15 14:16:37.353193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.422 [2024-07-15 14:16:37.353201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.422 [2024-07-15 14:16:37.353207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.353221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.363049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.363114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.363129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.363137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.363143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.363157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.373170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.373229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.373244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.373251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.373257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.373271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.383204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.383271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.383286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.383293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.383299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.383312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.393248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.393313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.393334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.393341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.393348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.393362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.403249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.403306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.403321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.403329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.403335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.403348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.413290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.413349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.413364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.413371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.413377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.413390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.423286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.423349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.423364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.423371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.423377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.423390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.433411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.433464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.433479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.433486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.433497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.433511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.443257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.443313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.443328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.443336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.443342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.443355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.453412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.453510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.453525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.453532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.453539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.453552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.463451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.463519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.463544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.463553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.463560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.463579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.473346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.473402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.473420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.473427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.473434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.473449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.483370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.483434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.483449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.483457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.483463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.483477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.493493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.493550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.423 [2024-07-15 14:16:37.493565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.423 [2024-07-15 14:16:37.493572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.423 [2024-07-15 14:16:37.493578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.423 [2024-07-15 14:16:37.493592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.423 qpair failed and we were unable to recover it.
00:30:39.423 [2024-07-15 14:16:37.503519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.423 [2024-07-15 14:16:37.503590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.424 [2024-07-15 14:16:37.503615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.424 [2024-07-15 14:16:37.503624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.424 [2024-07-15 14:16:37.503631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.424 [2024-07-15 14:16:37.503649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.424 qpair failed and we were unable to recover it.
00:30:39.424 [2024-07-15 14:16:37.513564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.424 [2024-07-15 14:16:37.513617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.424 [2024-07-15 14:16:37.513634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.424 [2024-07-15 14:16:37.513641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.424 [2024-07-15 14:16:37.513648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.424 [2024-07-15 14:16:37.513663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.424 qpair failed and we were unable to recover it.
00:30:39.424 [2024-07-15 14:16:37.523473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.424 [2024-07-15 14:16:37.523534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.424 [2024-07-15 14:16:37.523549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.424 [2024-07-15 14:16:37.523557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.424 [2024-07-15 14:16:37.523568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.424 [2024-07-15 14:16:37.523583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.424 qpair failed and we were unable to recover it.
00:30:39.424 [2024-07-15 14:16:37.533618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.424 [2024-07-15 14:16:37.533677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.424 [2024-07-15 14:16:37.533692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.424 [2024-07-15 14:16:37.533699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.424 [2024-07-15 14:16:37.533705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.424 [2024-07-15 14:16:37.533719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.424 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.543648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.543709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.543724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.543731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.543737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.543754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.553651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.553710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.553725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.553732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.553738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.553756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.563634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.563699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.563715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.563722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.563729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.563743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.573720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.573804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.573820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.573827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.573833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.573847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.583740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.583806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.583821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.583829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.583835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.583849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.593769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.593821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.593837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.593844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.593850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.593864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.603846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.603907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.603922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.603929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.603935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.603949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.613832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.613896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.613911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.613918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.613928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.613942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.623849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.623916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.623930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.623938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.623944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.623957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.633889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.690 [2024-07-15 14:16:37.633976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.690 [2024-07-15 14:16:37.633992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.690 [2024-07-15 14:16:37.633999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.690 [2024-07-15 14:16:37.634005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.690 [2024-07-15 14:16:37.634019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.690 qpair failed and we were unable to recover it.
00:30:39.690 [2024-07-15 14:16:37.643922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.691 [2024-07-15 14:16:37.643980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.691 [2024-07-15 14:16:37.643995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.691 [2024-07-15 14:16:37.644002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.691 [2024-07-15 14:16:37.644009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.691 [2024-07-15 14:16:37.644023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.691 qpair failed and we were unable to recover it.
00:30:39.691 [2024-07-15 14:16:37.653933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.691 [2024-07-15 14:16:37.653993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.691 [2024-07-15 14:16:37.654008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.691 [2024-07-15 14:16:37.654015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.691 [2024-07-15 14:16:37.654022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.691 [2024-07-15 14:16:37.654036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.691 qpair failed and we were unable to recover it.
00:30:39.691 [2024-07-15 14:16:37.663866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.691 [2024-07-15 14:16:37.663930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.691 [2024-07-15 14:16:37.663947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.691 [2024-07-15 14:16:37.663954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.691 [2024-07-15 14:16:37.663960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.691 [2024-07-15 14:16:37.663975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.691 qpair failed and we were unable to recover it.
00:30:39.691 [2024-07-15 14:16:37.674028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.691 [2024-07-15 14:16:37.674127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.691 [2024-07-15 14:16:37.674142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.691 [2024-07-15 14:16:37.674150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.691 [2024-07-15 14:16:37.674156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.691 [2024-07-15 14:16:37.674170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.691 qpair failed and we were unable to recover it.
00:30:39.691 [2024-07-15 14:16:37.683920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.691 [2024-07-15 14:16:37.683977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.691 [2024-07-15 14:16:37.683992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.691 [2024-07-15 14:16:37.683999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.691 [2024-07-15 14:16:37.684005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.691 [2024-07-15 14:16:37.684019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.691 qpair failed and we were unable to recover it.
00:30:39.691 [2024-07-15 14:16:37.694073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.691 [2024-07-15 14:16:37.694127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.691 [2024-07-15 14:16:37.694142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.691 [2024-07-15 14:16:37.694149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.691 [2024-07-15 14:16:37.694155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:39.691 [2024-07-15 14:16:37.694169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.691 qpair failed and we were unable to recover it.
00:30:39.691 [2024-07-15 14:16:37.704086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.704162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.704176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.704187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.704193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.704207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 00:30:39.691 [2024-07-15 14:16:37.713994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.714057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.714072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.714080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.714086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.714100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 00:30:39.691 [2024-07-15 14:16:37.724124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.724184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.724199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.724206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.724213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.724226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 
00:30:39.691 [2024-07-15 14:16:37.734131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.734192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.734207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.734214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.734220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.734234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 00:30:39.691 [2024-07-15 14:16:37.744191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.744274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.744289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.744296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.744302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.744316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 00:30:39.691 [2024-07-15 14:16:37.754181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.754235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.754250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.754258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.754264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.754277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 
00:30:39.691 [2024-07-15 14:16:37.764107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.764160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.764175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.764182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.764188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.764202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 00:30:39.691 [2024-07-15 14:16:37.774276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.774332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.774346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.774353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.691 [2024-07-15 14:16:37.774360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.691 [2024-07-15 14:16:37.774373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.691 qpair failed and we were unable to recover it. 00:30:39.691 [2024-07-15 14:16:37.784302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.691 [2024-07-15 14:16:37.784364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.691 [2024-07-15 14:16:37.784380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.691 [2024-07-15 14:16:37.784387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.692 [2024-07-15 14:16:37.784393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.692 [2024-07-15 14:16:37.784407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.692 qpair failed and we were unable to recover it. 
00:30:39.692 [2024-07-15 14:16:37.794327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.692 [2024-07-15 14:16:37.794421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.692 [2024-07-15 14:16:37.794437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.692 [2024-07-15 14:16:37.794448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.692 [2024-07-15 14:16:37.794454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.692 [2024-07-15 14:16:37.794468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.692 qpair failed and we were unable to recover it. 00:30:39.988 [2024-07-15 14:16:37.804358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.988 [2024-07-15 14:16:37.804463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.988 [2024-07-15 14:16:37.804479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.988 [2024-07-15 14:16:37.804486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.988 [2024-07-15 14:16:37.804493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.988 [2024-07-15 14:16:37.804506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.988 qpair failed and we were unable to recover it. 00:30:39.988 [2024-07-15 14:16:37.814288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.988 [2024-07-15 14:16:37.814344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.988 [2024-07-15 14:16:37.814359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.988 [2024-07-15 14:16:37.814366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.988 [2024-07-15 14:16:37.814373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.988 [2024-07-15 14:16:37.814386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.988 qpair failed and we were unable to recover it. 
00:30:39.988 [2024-07-15 14:16:37.824409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.988 [2024-07-15 14:16:37.824469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.988 [2024-07-15 14:16:37.824483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.988 [2024-07-15 14:16:37.824491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.988 [2024-07-15 14:16:37.824497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.988 [2024-07-15 14:16:37.824510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.988 qpair failed and we were unable to recover it. 00:30:39.988 [2024-07-15 14:16:37.834423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.988 [2024-07-15 14:16:37.834525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.988 [2024-07-15 14:16:37.834549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.988 [2024-07-15 14:16:37.834557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.988 [2024-07-15 14:16:37.834564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.988 [2024-07-15 14:16:37.834583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.988 qpair failed and we were unable to recover it. 00:30:39.988 [2024-07-15 14:16:37.844448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.988 [2024-07-15 14:16:37.844509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.988 [2024-07-15 14:16:37.844534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.988 [2024-07-15 14:16:37.844542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.988 [2024-07-15 14:16:37.844549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.988 [2024-07-15 14:16:37.844567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.988 qpair failed and we were unable to recover it. 
00:30:39.988 [2024-07-15 14:16:37.854460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.988 [2024-07-15 14:16:37.854522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.988 [2024-07-15 14:16:37.854547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.988 [2024-07-15 14:16:37.854556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.988 [2024-07-15 14:16:37.854563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.988 [2024-07-15 14:16:37.854582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.988 qpair failed and we were unable to recover it. 00:30:39.988 [2024-07-15 14:16:37.864517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.988 [2024-07-15 14:16:37.864585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.988 [2024-07-15 14:16:37.864602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.988 [2024-07-15 14:16:37.864609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.988 [2024-07-15 14:16:37.864616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.988 [2024-07-15 14:16:37.864631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.988 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.874519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.874577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.874593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.874600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.874606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.874620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 
00:30:39.989 [2024-07-15 14:16:37.884561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.884615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.884630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.884641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.884648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.884662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.894616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.894674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.894690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.894697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.894703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.894718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.904568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.904631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.904646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.904653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.904659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.904673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 
00:30:39.989 [2024-07-15 14:16:37.914649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.914708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.914723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.914731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.914737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.914754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.924664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.924720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.924735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.924742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.924749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.924766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.934706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.934766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.934781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.934789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.934795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.934809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 
00:30:39.989 [2024-07-15 14:16:37.944739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.944856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.944871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.944878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.944885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.944899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.954722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.954825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.954839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.954847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.954854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.954868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.964772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.964826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.964841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.964848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.964855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.964869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 
00:30:39.989 [2024-07-15 14:16:37.974865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.974930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.974945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.974955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.974962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.974976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.984832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.984894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.984908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.984915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.984922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.984935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:37.994845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:37.994897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:37.994912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:37.994919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:37.994925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:37.994939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 
00:30:39.989 [2024-07-15 14:16:38.004887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:38.004941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:38.004956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.989 [2024-07-15 14:16:38.004964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.989 [2024-07-15 14:16:38.004970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.989 [2024-07-15 14:16:38.004984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.989 qpair failed and we were unable to recover it. 00:30:39.989 [2024-07-15 14:16:38.015375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.989 [2024-07-15 14:16:38.015424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.989 [2024-07-15 14:16:38.015439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.015446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.015452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.015466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 00:30:39.990 [2024-07-15 14:16:38.024946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.025000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.025015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.025022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.025028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.025042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 
00:30:39.990 [2024-07-15 14:16:38.034927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.034989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.035003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.035011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.035017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.035031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 00:30:39.990 [2024-07-15 14:16:38.045001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.045097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.045112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.045119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.045125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.045140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 00:30:39.990 [2024-07-15 14:16:38.054978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.055026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.055041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.055048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.055054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.055068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 
00:30:39.990 [2024-07-15 14:16:38.065065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.065123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.065141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.065148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.065154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.065168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 00:30:39.990 [2024-07-15 14:16:38.075042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.075091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.075106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.075113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.075120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.075133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 00:30:39.990 [2024-07-15 14:16:38.085092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.085144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.085158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.085166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.085172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.085186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 
00:30:39.990 [2024-07-15 14:16:38.095110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.990 [2024-07-15 14:16:38.095160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.990 [2024-07-15 14:16:38.095175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.990 [2024-07-15 14:16:38.095182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.990 [2024-07-15 14:16:38.095188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:39.990 [2024-07-15 14:16:38.095201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.990 qpair failed and we were unable to recover it. 00:30:40.253 [2024-07-15 14:16:38.105179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.253 [2024-07-15 14:16:38.105234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.253 [2024-07-15 14:16:38.105249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.253 [2024-07-15 14:16:38.105256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.253 [2024-07-15 14:16:38.105263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.253 [2024-07-15 14:16:38.105280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.253 qpair failed and we were unable to recover it. 00:30:40.253 [2024-07-15 14:16:38.115136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.253 [2024-07-15 14:16:38.115193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.253 [2024-07-15 14:16:38.115208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.253 [2024-07-15 14:16:38.115215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.253 [2024-07-15 14:16:38.115222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.253 [2024-07-15 14:16:38.115235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.253 qpair failed and we were unable to recover it. 
00:30:40.253 [2024-07-15 14:16:38.125127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.253 [2024-07-15 14:16:38.125190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.253 [2024-07-15 14:16:38.125205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.253 [2024-07-15 14:16:38.125212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.253 [2024-07-15 14:16:38.125219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.253 [2024-07-15 14:16:38.125232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.253 qpair failed and we were unable to recover it. 00:30:40.253 [2024-07-15 14:16:38.135100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.253 [2024-07-15 14:16:38.135158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.253 [2024-07-15 14:16:38.135172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.253 [2024-07-15 14:16:38.135179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.253 [2024-07-15 14:16:38.135186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.253 [2024-07-15 14:16:38.135199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.253 qpair failed and we were unable to recover it. 00:30:40.253 [2024-07-15 14:16:38.145264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.253 [2024-07-15 14:16:38.145323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.253 [2024-07-15 14:16:38.145338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.253 [2024-07-15 14:16:38.145345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.253 [2024-07-15 14:16:38.145351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.253 [2024-07-15 14:16:38.145365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.253 qpair failed and we were unable to recover it. 
00:30:40.253 [2024-07-15 14:16:38.155152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.253 [2024-07-15 14:16:38.155197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.253 [2024-07-15 14:16:38.155216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.253 [2024-07-15 14:16:38.155223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.253 [2024-07-15 14:16:38.155229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.253 [2024-07-15 14:16:38.155243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.165324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.165379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.165395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.165402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.165408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.165421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.175338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.175388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.175403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.175410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.175417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.175431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 
00:30:40.254 [2024-07-15 14:16:38.185384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.185470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.185485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.185493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.185499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.185512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.195241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.195295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.195310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.195317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.195324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.195344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.205434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.205497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.205512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.205519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.205525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.205538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 
00:30:40.254 [2024-07-15 14:16:38.215435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.215496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.215520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.215529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.215536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.215556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.225418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.225518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.225536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.225544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.225550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.225565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.235470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.235556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.235572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.235579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.235586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.235601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 
00:30:40.254 [2024-07-15 14:16:38.245533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.245589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.245609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.245616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.245622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.245636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.255535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.255597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.255612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.255619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.255626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.255640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.265625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.265693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.265708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.265716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.265722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.265736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 
00:30:40.254 [2024-07-15 14:16:38.275599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.275646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.275662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.275669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.275675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.275689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.285641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.285696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.285712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.285720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.285726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.285743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.295512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.295565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.295580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.295588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.295594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.295609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 
00:30:40.254 [2024-07-15 14:16:38.305703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.305760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.305775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.305782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.305789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.305803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.315695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.315759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.315774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.315782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.315788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.315802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.254 [2024-07-15 14:16:38.325760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.325813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.325828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.325835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.325842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.325855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 
00:30:40.254 [2024-07-15 14:16:38.335739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.254 [2024-07-15 14:16:38.335796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.254 [2024-07-15 14:16:38.335814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.254 [2024-07-15 14:16:38.335821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.254 [2024-07-15 14:16:38.335828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.254 [2024-07-15 14:16:38.335841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.254 qpair failed and we were unable to recover it. 00:30:40.255 [2024-07-15 14:16:38.345825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-07-15 14:16:38.345880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-07-15 14:16:38.345895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-07-15 14:16:38.345903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-07-15 14:16:38.345909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.255 [2024-07-15 14:16:38.345922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.255 qpair failed and we were unable to recover it. 00:30:40.255 [2024-07-15 14:16:38.355801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-07-15 14:16:38.355901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-07-15 14:16:38.355916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-07-15 14:16:38.355923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-07-15 14:16:38.355929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.255 [2024-07-15 14:16:38.355943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-07-15 14:16:38.365890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-07-15 14:16:38.365951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-07-15 14:16:38.365966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-07-15 14:16:38.365973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-07-15 14:16:38.365979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.255 [2024-07-15 14:16:38.365993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.255 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.375842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.375931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.375945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.375952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.375963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.375977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.385943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.386002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.386017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.386024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.386031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.386044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-07-15 14:16:38.395925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.395974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.395989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.395996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.396002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.396016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.405994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.406046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.406061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.406069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.406075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.406089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.416023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.416073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.416089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.416096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.416102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.416117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-07-15 14:16:38.426065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.426145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.426161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.426168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.426175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.426190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.436030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.436124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.436139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.436147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.436153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.436167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.446073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.446145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.446159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.446167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.446175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.446188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-07-15 14:16:38.455957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.456008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.456023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.456030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.456036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.456050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.466149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.466239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.466255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.466262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.466272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.466285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 00:30:40.518 [2024-07-15 14:16:38.476137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.476198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-07-15 14:16:38.476212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-07-15 14:16:38.476220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-07-15 14:16:38.476226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.518 [2024-07-15 14:16:38.476240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-07-15 14:16:38.486196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-07-15 14:16:38.486250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.486264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.486271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.486278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.486291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.496190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.496237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.496252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.496259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.496265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.496279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.506266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.506327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.506342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.506349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.506355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.506368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-07-15 14:16:38.516251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.516304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.516320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.516327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.516333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.516346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.526362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.526416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.526431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.526438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.526445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.526458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.536483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.536530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.536545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.536552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.536558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.536572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-07-15 14:16:38.546370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.546428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.546443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.546450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.546456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.546470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.556347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.556402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.556417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.556425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.556434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.556449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.566424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.566477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.566492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.566499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.566506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.566520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-07-15 14:16:38.576420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.576474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.576499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.576507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.576514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.576533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.586477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.586540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.586556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.586564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.586570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.586586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.596495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.596551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.596576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.596585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.596592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.596611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-07-15 14:16:38.606492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.606552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.606577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.606585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.606592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.606612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.616533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.616583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.616600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.616608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.616614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.616631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.519 qpair failed and we were unable to recover it. 00:30:40.519 [2024-07-15 14:16:38.626592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-07-15 14:16:38.626654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-07-15 14:16:38.626670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-07-15 14:16:38.626677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-07-15 14:16:38.626684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.519 [2024-07-15 14:16:38.626698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.520 qpair failed and we were unable to recover it. 
00:30:40.782 [2024-07-15 14:16:38.636565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.782 [2024-07-15 14:16:38.636626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.782 [2024-07-15 14:16:38.636641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.782 [2024-07-15 14:16:38.636649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.782 [2024-07-15 14:16:38.636655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.782 [2024-07-15 14:16:38.636670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.782 qpair failed and we were unable to recover it. 00:30:40.782 [2024-07-15 14:16:38.646615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.782 [2024-07-15 14:16:38.646704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.782 [2024-07-15 14:16:38.646720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.782 [2024-07-15 14:16:38.646732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.782 [2024-07-15 14:16:38.646739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.782 [2024-07-15 14:16:38.646757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.782 qpair failed and we were unable to recover it. 00:30:40.782 [2024-07-15 14:16:38.656629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.782 [2024-07-15 14:16:38.656678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.782 [2024-07-15 14:16:38.656694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.782 [2024-07-15 14:16:38.656701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.782 [2024-07-15 14:16:38.656707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.782 [2024-07-15 14:16:38.656721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.782 qpair failed and we were unable to recover it. 
00:30:40.782 [2024-07-15 14:16:38.666667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.782 [2024-07-15 14:16:38.666732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.782 [2024-07-15 14:16:38.666747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.782 [2024-07-15 14:16:38.666758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.666764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.666778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.676552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.676608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.676624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.676631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.676637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.676651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.686682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.686735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.686754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.686762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.686768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.686782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 
00:30:40.783 [2024-07-15 14:16:38.696744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.696802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.696817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.696825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.696831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.696845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.706851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.706923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.706938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.706945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.706952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.706966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.716778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.716830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.716846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.716853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.716859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.716874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 
00:30:40.783 [2024-07-15 14:16:38.726857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.726910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.726926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.726933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.726939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.726954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.736826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.736875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.736890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.736901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.736908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.736922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.746902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.746964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.746979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.746986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.746992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.747006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 
00:30:40.783 [2024-07-15 14:16:38.756901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.756993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.757009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.757017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.757024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.757037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.766924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.767024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.767040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.767047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.767054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.767067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.776836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.776886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.776901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.776908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.776914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.776928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 
00:30:40.783 [2024-07-15 14:16:38.787058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.787111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.787126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.787133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.787140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.787154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.796904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.796957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.796973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.796981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.783 [2024-07-15 14:16:38.796987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.783 [2024-07-15 14:16:38.797002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.783 qpair failed and we were unable to recover it. 00:30:40.783 [2024-07-15 14:16:38.807067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.783 [2024-07-15 14:16:38.807116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.783 [2024-07-15 14:16:38.807131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.783 [2024-07-15 14:16:38.807138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.784 [2024-07-15 14:16:38.807145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.784 [2024-07-15 14:16:38.807159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.784 qpair failed and we were unable to recover it. 
00:30:40.784 [2024-07-15 14:16:38.817124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.784 [2024-07-15 14:16:38.817175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.784 [2024-07-15 14:16:38.817190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.784 [2024-07-15 14:16:38.817197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.784 [2024-07-15 14:16:38.817203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.784 [2024-07-15 14:16:38.817217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.784 qpair failed and we were unable to recover it. 00:30:40.784 [2024-07-15 14:16:38.826981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.784 [2024-07-15 14:16:38.827033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.784 [2024-07-15 14:16:38.827047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.784 [2024-07-15 14:16:38.827058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.784 [2024-07-15 14:16:38.827065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.784 [2024-07-15 14:16:38.827078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.784 qpair failed and we were unable to recover it. 00:30:40.784 [2024-07-15 14:16:38.837112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.784 [2024-07-15 14:16:38.837208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.784 [2024-07-15 14:16:38.837224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.784 [2024-07-15 14:16:38.837231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.784 [2024-07-15 14:16:38.837238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:40.784 [2024-07-15 14:16:38.837251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.784 qpair failed and we were unable to recover it. 
00:30:40.784 [2024-07-15 14:16:38.847077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.784 [2024-07-15 14:16:38.847133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.784 [2024-07-15 14:16:38.847149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.784 [2024-07-15 14:16:38.847156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.784 [2024-07-15 14:16:38.847162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:40.784 [2024-07-15 14:16:38.847176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:40.784 qpair failed and we were unable to recover it.
00:30:40.784 [2024-07-15 14:16:38.857164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.784 [2024-07-15 14:16:38.857213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.784 [2024-07-15 14:16:38.857229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.784 [2024-07-15 14:16:38.857237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.784 [2024-07-15 14:16:38.857243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:40.784 [2024-07-15 14:16:38.857257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:40.784 qpair failed and we were unable to recover it.
00:30:40.784 [2024-07-15 14:16:38.867207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.784 [2024-07-15 14:16:38.867342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.784 [2024-07-15 14:16:38.867357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.784 [2024-07-15 14:16:38.867364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.784 [2024-07-15 14:16:38.867370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:40.784 [2024-07-15 14:16:38.867384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:40.784 qpair failed and we were unable to recover it.
00:30:40.784 [2024-07-15 14:16:38.877230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.784 [2024-07-15 14:16:38.877280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.784 [2024-07-15 14:16:38.877295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.784 [2024-07-15 14:16:38.877302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.784 [2024-07-15 14:16:38.877308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:40.784 [2024-07-15 14:16:38.877322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:40.784 qpair failed and we were unable to recover it.
00:30:40.784 [2024-07-15 14:16:38.887258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.784 [2024-07-15 14:16:38.887303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.784 [2024-07-15 14:16:38.887318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.784 [2024-07-15 14:16:38.887325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.784 [2024-07-15 14:16:38.887332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:40.784 [2024-07-15 14:16:38.887345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:40.784 qpair failed and we were unable to recover it.
00:30:41.047 [2024-07-15 14:16:38.897285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.047 [2024-07-15 14:16:38.897332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.047 [2024-07-15 14:16:38.897347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.047 [2024-07-15 14:16:38.897354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.047 [2024-07-15 14:16:38.897361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.047 [2024-07-15 14:16:38.897374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.047 qpair failed and we were unable to recover it.
00:30:41.047 [2024-07-15 14:16:38.907290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.047 [2024-07-15 14:16:38.907341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.047 [2024-07-15 14:16:38.907356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.047 [2024-07-15 14:16:38.907363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.047 [2024-07-15 14:16:38.907369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.047 [2024-07-15 14:16:38.907383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.047 qpair failed and we were unable to recover it.
00:30:41.047 [2024-07-15 14:16:38.917300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.047 [2024-07-15 14:16:38.917366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.047 [2024-07-15 14:16:38.917381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.047 [2024-07-15 14:16:38.917393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.047 [2024-07-15 14:16:38.917399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.047 [2024-07-15 14:16:38.917413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.047 qpair failed and we were unable to recover it.
00:30:41.047 [2024-07-15 14:16:38.927367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.927415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.927432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.927440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.927448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.927462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:38.937397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.937454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.937479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.937487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.937495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.937514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:38.947419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.947481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.947506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.947515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.947521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.947541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:38.957442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.957496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.957513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.957521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.957528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.957542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:38.967469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.967518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.967534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.967541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.967547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.967562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:38.977511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.977564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.977579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.977586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.977593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.977607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:38.987432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.987542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.987568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.987576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.987583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.987601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:38.997596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:38.997674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:38.997691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:38.997699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:38.997706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:38.997720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:39.007577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:39.007627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:39.007650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:39.007658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:39.007664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:39.007679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:39.017609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:39.017662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:39.017677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:39.017685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:39.017691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:39.017705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:39.027661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:39.027715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:39.027730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:39.027738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:39.027744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:39.027761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:39.037668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:39.037719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:39.037734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:39.037741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:39.037747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:39.037764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:39.047695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:39.047750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:39.047769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:39.047776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:39.047782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:39.047800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:39.057733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:39.057788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:39.057803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:39.057810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.048 [2024-07-15 14:16:39.057817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.048 [2024-07-15 14:16:39.057831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.048 qpair failed and we were unable to recover it.
00:30:41.048 [2024-07-15 14:16:39.067760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.048 [2024-07-15 14:16:39.067811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.048 [2024-07-15 14:16:39.067825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.048 [2024-07-15 14:16:39.067832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.067839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.067853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.077700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.077790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.077806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.077813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.077819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.077834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.087801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.087851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.087866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.087873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.087880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.087893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.097808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.097858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.097876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.097884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.097890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.097904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.107861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.107914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.107929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.107936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.107943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.107957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.117816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.117864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.117878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.117886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.117892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.117906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.127892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.127946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.127961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.127968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.127974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.127988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.137908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.137956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.137971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.137978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.137984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.138002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.148017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.148074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.148090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.148097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.148103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.148117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.049 [2024-07-15 14:16:39.157995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.049 [2024-07-15 14:16:39.158046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.049 [2024-07-15 14:16:39.158061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.049 [2024-07-15 14:16:39.158068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.049 [2024-07-15 14:16:39.158074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.049 [2024-07-15 14:16:39.158088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.049 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.167895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.167944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.167959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.167966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.167972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.167986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.177923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.177970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.177985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.177993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.177999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.178012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.188060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.188114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.188131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.188139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.188145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.188158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.198137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.198193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.198207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.198214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.198221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.198234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.208135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.208182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.208196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.208204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.208210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.208224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.218143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.218196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.218212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.218219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.218229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.218243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.228176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.228241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.228256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.228263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.228269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.228287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.238192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.238244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.238259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.238266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.238272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.312 [2024-07-15 14:16:39.238286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.312 qpair failed and we were unable to recover it.
00:30:41.312 [2024-07-15 14:16:39.248272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.312 [2024-07-15 14:16:39.248326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.312 [2024-07-15 14:16:39.248341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.312 [2024-07-15 14:16:39.248348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.312 [2024-07-15 14:16:39.248355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.248368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.258285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.258334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.258349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.258357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.258363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.258377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.268285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.268340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.268354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.268362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.268368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.268382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.278326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.278377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.278395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.278403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.278409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.278423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.288394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.288444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.288459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.288466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.288473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.288486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.298362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.298409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.298424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.298431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.298437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.298451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.308293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.308358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.308374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.308383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.308393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.308408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.318436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.318484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.318500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.318507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.318517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.318531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.328472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.328524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.328539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.328546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.328552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.328566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.338485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.338537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.338552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.338559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.338566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.338579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.348504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.348564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.348579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.348586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.348592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.348606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.358411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.358457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.358471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.358479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.358485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.358499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.368563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.368653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.368669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.368676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.368683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.368697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.378595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.378645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.378660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.378667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.378674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.378687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.388636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.388692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.313 [2024-07-15 14:16:39.388707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.313 [2024-07-15 14:16:39.388714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.313 [2024-07-15 14:16:39.388721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.313 [2024-07-15 14:16:39.388735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.313 qpair failed and we were unable to recover it.
00:30:41.313 [2024-07-15 14:16:39.398660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.313 [2024-07-15 14:16:39.398707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.314 [2024-07-15 14:16:39.398722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.314 [2024-07-15 14:16:39.398729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.314 [2024-07-15 14:16:39.398736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.314 [2024-07-15 14:16:39.398749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.314 qpair failed and we were unable to recover it.
00:30:41.314 [2024-07-15 14:16:39.408690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.314 [2024-07-15 14:16:39.408770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.314 [2024-07-15 14:16:39.408786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.314 [2024-07-15 14:16:39.408794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.314 [2024-07-15 14:16:39.408803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.314 [2024-07-15 14:16:39.408818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.314 qpair failed and we were unable to recover it.
00:30:41.314 [2024-07-15 14:16:39.418709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.314 [2024-07-15 14:16:39.418761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.314 [2024-07-15 14:16:39.418777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.314 [2024-07-15 14:16:39.418784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.314 [2024-07-15 14:16:39.418790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.314 [2024-07-15 14:16:39.418804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.314 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.428737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.428795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.428810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.428817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.428824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.428837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.438740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.438829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.438845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.438852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.438859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.438873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.448841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.448931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.448947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.448954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.448961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.448976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.458817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.458870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.458884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.458892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.458898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.458912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.468854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.468945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.468960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.468968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.468974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.468988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.478755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.478803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.478817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.478825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.478831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.478844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.488894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.488946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.488961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.488968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.488975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.488988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.498924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.498997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.499011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.499018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.499029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.577 [2024-07-15 14:16:39.499042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.577 qpair failed and we were unable to recover it.
00:30:41.577 [2024-07-15 14:16:39.508958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.577 [2024-07-15 14:16:39.509047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.577 [2024-07-15 14:16:39.509063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.577 [2024-07-15 14:16:39.509070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.577 [2024-07-15 14:16:39.509076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.509091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.518845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.518901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.518916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.518923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.518930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.518944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.528984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.529033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.529048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.529055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.529061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.529075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.539032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.539083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.539098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.539105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.539111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.539125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.549080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.549135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.549150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.549157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.549163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.549177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.559051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.559102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.559120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.559127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.559133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.559149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.568990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.569038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.569053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.569060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.569067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.569082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.579141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.579189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.579204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.579211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.579217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.579231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.589163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.589217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.589232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.589243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.589249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.589263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.599080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.599135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.599150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.599157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.599163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.599177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.609220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.609282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.609297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.609304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.609310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.609324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.619133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.619183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.619198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.619205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.619211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.619225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.629290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.629345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.629359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.629366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.629373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.629386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.639290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.639390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.639405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.639413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.639419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.639433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.649333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.578 [2024-07-15 14:16:39.649383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.578 [2024-07-15 14:16:39.649398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.578 [2024-07-15 14:16:39.649405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.578 [2024-07-15 14:16:39.649411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.578 [2024-07-15 14:16:39.649425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.578 qpair failed and we were unable to recover it.
00:30:41.578 [2024-07-15 14:16:39.659340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.579 [2024-07-15 14:16:39.659388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.579 [2024-07-15 14:16:39.659403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.579 [2024-07-15 14:16:39.659410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.579 [2024-07-15 14:16:39.659416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.579 [2024-07-15 14:16:39.659430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.579 qpair failed and we were unable to recover it.
00:30:41.579 [2024-07-15 14:16:39.669422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.579 [2024-07-15 14:16:39.669477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.579 [2024-07-15 14:16:39.669492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.579 [2024-07-15 14:16:39.669499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.579 [2024-07-15 14:16:39.669505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.579 [2024-07-15 14:16:39.669519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.579 qpair failed and we were unable to recover it.
00:30:41.579 [2024-07-15 14:16:39.679398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.579 [2024-07-15 14:16:39.679446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.579 [2024-07-15 14:16:39.679461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.579 [2024-07-15 14:16:39.679471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.579 [2024-07-15 14:16:39.679478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.579 [2024-07-15 14:16:39.679491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.579 qpair failed and we were unable to recover it.
00:30:41.579 [2024-07-15 14:16:39.689339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.579 [2024-07-15 14:16:39.689398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.579 [2024-07-15 14:16:39.689413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.579 [2024-07-15 14:16:39.689421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.579 [2024-07-15 14:16:39.689427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.579 [2024-07-15 14:16:39.689440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.579 qpair failed and we were unable to recover it.
00:30:41.841 [2024-07-15 14:16:39.699451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.841 [2024-07-15 14:16:39.699502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.841 [2024-07-15 14:16:39.699517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.841 [2024-07-15 14:16:39.699524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.841 [2024-07-15 14:16:39.699531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.841 [2024-07-15 14:16:39.699544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.841 qpair failed and we were unable to recover it.
00:30:41.841 [2024-07-15 14:16:39.709482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.841 [2024-07-15 14:16:39.709581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.841 [2024-07-15 14:16:39.709598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.841 [2024-07-15 14:16:39.709605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.841 [2024-07-15 14:16:39.709612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.841 [2024-07-15 14:16:39.709626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.841 qpair failed and we were unable to recover it.
00:30:41.841 [2024-07-15 14:16:39.719483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.841 [2024-07-15 14:16:39.719536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.841 [2024-07-15 14:16:39.719551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.719559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.719565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.719579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.729529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.729577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.729592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.729600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.729606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.729620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.739537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.739586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.739601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.739608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.739615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.739628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.749582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.749640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.749656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.749663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.749669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.749683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.759586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.759632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.759647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.759654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.759661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.759674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.769651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.769702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.769717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.769728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.769735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.769749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.779668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.779766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.779781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.779789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.779796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.779810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.789561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.789611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.789626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.789633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.789640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.789653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.799592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.799646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.799661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.799668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.799674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.799688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.809627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.809680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.809695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.809702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.809708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.809722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.819772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.819826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.819842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.819850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.819856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.819870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.829788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.829846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.829861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.829868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.829875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.829888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.839821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.839971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.839987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.839994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.840001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.840015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.849851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.849900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.849915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.842 [2024-07-15 14:16:39.849922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.842 [2024-07-15 14:16:39.849928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.842 [2024-07-15 14:16:39.849942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.842 qpair failed and we were unable to recover it.
00:30:41.842 [2024-07-15 14:16:39.859767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.842 [2024-07-15 14:16:39.859817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.842 [2024-07-15 14:16:39.859831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.859842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.859849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.859862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.869899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.869956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.869971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.869978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.869984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.869998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.879927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.879975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.879990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.879998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.880004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.880018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.889964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.890014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.890029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.890036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.890042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.890057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.899903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.899951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.899966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.899974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.899980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.899994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.910025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.910075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.910091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.910098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.910104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.910118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.920004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.920100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.920116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.920123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.920130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.920143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.929971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.930023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.930038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.930046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.930052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.930066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.940101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.940154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.940169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.940176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.940182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.940196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:41.843 [2024-07-15 14:16:39.950126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.843 [2024-07-15 14:16:39.950180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.843 [2024-07-15 14:16:39.950199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.843 [2024-07-15 14:16:39.950206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.843 [2024-07-15 14:16:39.950213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:41.843 [2024-07-15 14:16:39.950226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:41.843 qpair failed and we were unable to recover it.
00:30:42.106 [2024-07-15 14:16:39.960125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.106 [2024-07-15 14:16:39.960179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.107 [2024-07-15 14:16:39.960195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.107 [2024-07-15 14:16:39.960202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.107 [2024-07-15 14:16:39.960208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:42.107 [2024-07-15 14:16:39.960221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:42.107 qpair failed and we were unable to recover it.
00:30:42.107 [2024-07-15 14:16:39.970168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.107 [2024-07-15 14:16:39.970217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.107 [2024-07-15 14:16:39.970232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.107 [2024-07-15 14:16:39.970239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.107 [2024-07-15 14:16:39.970246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:42.107 [2024-07-15 14:16:39.970259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:42.107 qpair failed and we were unable to recover it.
00:30:42.107 [2024-07-15 14:16:39.980207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.107 [2024-07-15 14:16:39.980256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.107 [2024-07-15 14:16:39.980271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.107 [2024-07-15 14:16:39.980278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.107 [2024-07-15 14:16:39.980284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:42.107 [2024-07-15 14:16:39.980298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:42.107 qpair failed and we were unable to recover it.
00:30:42.107 [2024-07-15 14:16:39.990222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:39.990286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:39.990301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:39.990308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:39.990314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:39.990328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.000254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.000345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.000361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.000368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.000375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.000389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.010304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.010357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.010379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.010387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.010394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.010411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 
00:30:42.107 [2024-07-15 14:16:40.020319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.020370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.020387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.020395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.020401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.020417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.030358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.030414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.030431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.030439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.030445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.030460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.040394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.040444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.040463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.040471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.040477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.040492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 
00:30:42.107 [2024-07-15 14:16:40.050416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.050517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.050535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.050543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.050549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.050565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.060423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.060475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.060491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.060498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.060505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.060519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.070464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.070520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.070537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.070544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.070551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.070566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 
00:30:42.107 [2024-07-15 14:16:40.080489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.080537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.080553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.080560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.080567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.080585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.090387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.090437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.090453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.090461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.090468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.107 [2024-07-15 14:16:40.090482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.107 qpair failed and we were unable to recover it. 00:30:42.107 [2024-07-15 14:16:40.100530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.107 [2024-07-15 14:16:40.100581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.107 [2024-07-15 14:16:40.100596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.107 [2024-07-15 14:16:40.100604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.107 [2024-07-15 14:16:40.100610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.100624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 
00:30:42.108 [2024-07-15 14:16:40.110615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.110697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.110712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.110720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.110727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.110741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.108 [2024-07-15 14:16:40.120595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.120647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.120663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.120670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.120677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.120690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.108 [2024-07-15 14:16:40.130616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.130666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.130685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.130692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.130699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.130712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 
00:30:42.108 [2024-07-15 14:16:40.140637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.140690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.140705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.140712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.140719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.140732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.108 [2024-07-15 14:16:40.150659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.150712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.150727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.150735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.150741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.150758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.108 [2024-07-15 14:16:40.160674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.160721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.160736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.160743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.160750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.160767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 
00:30:42.108 [2024-07-15 14:16:40.170584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.170634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.170650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.170657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.170663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.170681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.108 [2024-07-15 14:16:40.180606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.180658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.180673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.180681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.180687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.180701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.108 [2024-07-15 14:16:40.190725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.190789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.190804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.190811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.190818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.190831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 
00:30:42.108 [2024-07-15 14:16:40.200796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.200849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.200866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.200873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.200882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.200897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.108 [2024-07-15 14:16:40.210797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.108 [2024-07-15 14:16:40.210846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.108 [2024-07-15 14:16:40.210862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.108 [2024-07-15 14:16:40.210869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.108 [2024-07-15 14:16:40.210876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.108 [2024-07-15 14:16:40.210890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.108 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.220864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.220918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.220936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.220944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.220950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.220964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 
00:30:42.372 [2024-07-15 14:16:40.230897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.230955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.230970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.230977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.230984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.230998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.240895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.240954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.240969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.240976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.240983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.240996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.250932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.250982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.250997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.251005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.251011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.251025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 
00:30:42.372 [2024-07-15 14:16:40.260832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.260882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.260897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.260904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.260914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.260927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.270965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.271022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.271038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.271045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.271051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.271065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.281023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.281075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.281090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.281098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.281104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.281117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 
00:30:42.372 [2024-07-15 14:16:40.291050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.291100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.291115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.291122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.291128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.291141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.301053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.301107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.301122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.301130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.301136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.301149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.311094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.311148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.311167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.311174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.311180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.311194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 
00:30:42.372 [2024-07-15 14:16:40.321119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.321169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.321184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.321191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.321197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.321211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.331014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.331064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.331079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.331086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.331092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.331106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 00:30:42.372 [2024-07-15 14:16:40.341048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.372 [2024-07-15 14:16:40.341100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.372 [2024-07-15 14:16:40.341116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.372 [2024-07-15 14:16:40.341123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.372 [2024-07-15 14:16:40.341129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.372 [2024-07-15 14:16:40.341143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.372 qpair failed and we were unable to recover it. 
00:30:42.372 [2024-07-15 14:16:40.351240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.351333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.351348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.351356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.351366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.351379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.361227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.361285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.361301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.361309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.361318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.361332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.371261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.371314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.371330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.371337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.371343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.371357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 
00:30:42.373 [2024-07-15 14:16:40.381301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.381354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.381369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.381376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.381382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.381396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.391183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.391277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.391292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.391299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.391306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.391320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.401336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.401395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.401410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.401417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.401423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.401437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 
00:30:42.373 [2024-07-15 14:16:40.411361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.411412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.411429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.411436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.411443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.411457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.421388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.421441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.421457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.421464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.421471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.421484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.431420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.431487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.431512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.431520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.431528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.431547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 
00:30:42.373 [2024-07-15 14:16:40.441471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.441520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.441538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.441545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.441556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.441572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.451449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.451507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.451532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.451541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.451548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.451567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.461395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.461454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.461479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.461488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.461495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.461514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 
00:30:42.373 [2024-07-15 14:16:40.471539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.471604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.471629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.471637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.471644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.471663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.373 [2024-07-15 14:16:40.481618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.373 [2024-07-15 14:16:40.481685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.373 [2024-07-15 14:16:40.481702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.373 [2024-07-15 14:16:40.481710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.373 [2024-07-15 14:16:40.481716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.373 [2024-07-15 14:16:40.481731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.373 qpair failed and we were unable to recover it. 00:30:42.636 [2024-07-15 14:16:40.491639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.636 [2024-07-15 14:16:40.491720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.636 [2024-07-15 14:16:40.491736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.637 [2024-07-15 14:16:40.491743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.637 [2024-07-15 14:16:40.491749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:42.637 [2024-07-15 14:16:40.491768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.637 qpair failed and we were unable to recover it. 
00:30:42.637 [2024-07-15 14:16:40.501610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.637 [2024-07-15 14:16:40.501673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.637 [2024-07-15 14:16:40.501689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.637 [2024-07-15 14:16:40.501696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.637 [2024-07-15 14:16:40.501703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:42.637 [2024-07-15 14:16:40.501717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:42.637 qpair failed and we were unable to recover it.
[... the preceding seven-line CONNECT failure sequence repeats 67 more times at roughly 10 ms intervals, timestamps 2024-07-15 14:16:40.511 through 14:16:41.173, each attempt identical apart from timestamps (tqpair=0xad9a50, qpair id 3, sct 1, sc 130) ...]
00:30:43.167 [2024-07-15 14:16:41.183422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.167 [2024-07-15 14:16:41.183479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.168 [2024-07-15 14:16:41.183504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.168 [2024-07-15 14:16:41.183512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.168 [2024-07-15 14:16:41.183519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50
00:30:43.168 [2024-07-15 14:16:41.183537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:43.168 qpair failed and we were unable to recover it.
00:30:43.168 [2024-07-15 14:16:41.193451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.193517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.193536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.193545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.193551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:43.168 [2024-07-15 14:16:41.193567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.168 qpair failed and we were unable to recover it. 00:30:43.168 [2024-07-15 14:16:41.203405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.203466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.203483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.203491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.203497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:43.168 [2024-07-15 14:16:41.203516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.168 qpair failed and we were unable to recover it. 00:30:43.168 [2024-07-15 14:16:41.213513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.213573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.213597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.213606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.213613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:43.168 [2024-07-15 14:16:41.213632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.168 qpair failed and we were unable to recover it. 
00:30:43.168 [2024-07-15 14:16:41.223540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.223633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.223651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.223658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.223665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:43.168 [2024-07-15 14:16:41.223679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.168 qpair failed and we were unable to recover it. 00:30:43.168 [2024-07-15 14:16:41.233578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.233653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.233669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.233676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.233683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:43.168 [2024-07-15 14:16:41.233697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.168 qpair failed and we were unable to recover it. 00:30:43.168 [2024-07-15 14:16:41.243595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.243646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.243661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.243669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.243675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xad9a50 00:30:43.168 [2024-07-15 14:16:41.243689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.168 qpair failed and we were unable to recover it. 
00:30:43.168 [2024-07-15 14:16:41.253625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.253817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.253893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.253919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.253939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9894000b90 00:30:43.168 [2024-07-15 14:16:41.253991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.168 qpair failed and we were unable to recover it. 00:30:43.168 [2024-07-15 14:16:41.263649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.168 [2024-07-15 14:16:41.263731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.168 [2024-07-15 14:16:41.263775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.168 [2024-07-15 14:16:41.263791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.168 [2024-07-15 14:16:41.263805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9894000b90 00:30:43.168 [2024-07-15 14:16:41.263836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.168 qpair failed and we were unable to recover it. 00:30:43.168 [2024-07-15 14:16:41.264022] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:43.168 A controller has encountered a failure and is being reset. 00:30:43.168 [2024-07-15 14:16:41.264139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7800 (9): Bad file descriptor 00:30:43.430 Controller properly reset. 
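With the final retry exhausted, the host's Keep Alive submission fails too, the controller is flagged as failed and reset, and every command still queued on the dead qpairs is completed in error. That is the burst of "Read/Write completed with error (sct=0, sc=8)" lines below, where sct 0 / sc 8 decodes as Generic Command Status / Command Aborted due to SQ Deletion. A hypothetical way to provoke the same reject-then-reset cycle against a locally running nvmf_tgt is to bounce the subsystem's listener with the stock rpc.py; the paths and addresses below are taken from this log, not from the commands the harness actually ran:

    # sketch only: drop and restore the TCP listener so in-flight qpairs die
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    sleep 1   # host-side CONNECT retries fail while the listener is gone
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420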
00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Write completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 Read completed with error (sct=0, sc=8) 00:30:43.430 starting I/O failed 00:30:43.430 [2024-07-15 14:16:41.405012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.430 Initializing NVMe Controllers 00:30:43.430 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:43.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:43.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:43.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:43.430 
Initialization complete. Launching workers. 00:30:43.430 Starting thread on core 1 00:30:43.430 Starting thread on core 2 00:30:43.430 Starting thread on core 3 00:30:43.430 Starting thread on core 0 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:43.430 00:30:43.430 real 0m11.584s 00:30:43.430 user 0m21.602s 00:30:43.430 sys 0m3.542s 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:43.430 ************************************ 00:30:43.430 END TEST nvmf_target_disconnect_tc2 00:30:43.430 ************************************ 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:43.430 rmmod nvme_tcp 00:30:43.430 rmmod nvme_fabrics 00:30:43.430 rmmod nvme_keyring 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1574195 ']' 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1574195 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1574195 ']' 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1574195 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:43.430 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1574195 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1574195' 00:30:43.692 killing process with pid 1574195 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1574195 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@972 -- # wait 1574195 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.692 14:16:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.252 14:16:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:46.252 00:30:46.252 real 0m22.550s 00:30:46.252 user 0m50.345s 00:30:46.252 sys 0m10.026s 00:30:46.252 14:16:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.252 14:16:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:46.252 ************************************ 00:30:46.252 END TEST nvmf_target_disconnect 00:30:46.252 ************************************ 00:30:46.252 14:16:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:46.252 14:16:43 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:46.252 14:16:43 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.252 14:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.252 14:16:43 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:46.252 00:30:46.252 real 23m16.912s 00:30:46.252 user 47m19.432s 00:30:46.252 sys 7m32.999s 00:30:46.252 14:16:43 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.252 14:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.252 ************************************ 00:30:46.252 END TEST nvmf_tcp 00:30:46.252 ************************************ 00:30:46.252 14:16:43 -- common/autotest_common.sh@1142 -- # return 0 00:30:46.252 14:16:43 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:46.252 14:16:43 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:46.252 14:16:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:46.252 14:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.252 14:16:43 -- common/autotest_common.sh@10 -- # set +x 00:30:46.252 ************************************ 00:30:46.252 START TEST spdkcli_nvmf_tcp 00:30:46.252 ************************************ 00:30:46.252 14:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:46.252 * Looking for test storage... 
00:30:46.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1576172 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1576172 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1576172 ']' 00:30:46.252 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.253 14:16:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:46.253 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.253 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.253 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.253 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.253 [2024-07-15 14:16:44.149284] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:46.253 [2024-07-15 14:16:44.149342] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576172 ] 00:30:46.253 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.253 [2024-07-15 14:16:44.214910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:46.253 [2024-07-15 14:16:44.280540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.253 [2024-07-15 14:16:44.280542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.824 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.824 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:46.824 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:46.824 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.824 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.085 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:47.085 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:47.085 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:47.085 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:47.085 14:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.085 14:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:47.085 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:47.085 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:47.085 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:47.085 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:47.085 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:47.085 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:47.085 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:47.085 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:47.085 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:47.085 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:47.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:47.085 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:47.085 ' 00:30:49.626 [2024-07-15 14:16:47.273390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.566 [2024-07-15 14:16:48.437238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:52.475 [2024-07-15 14:16:50.575750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:54.388 [2024-07-15 14:16:52.413139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:55.775 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:55.775 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:55.775 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:55.775 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:55.775 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:55.775 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:55.775 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:55.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:55.775 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:55.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:55.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:55.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:55.775 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:56.035 14:16:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.296 14:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:56.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:56.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:56.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:56.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:56.297 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:56.297 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:56.297 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:56.297 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:56.297 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:56.297 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:56.297 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:56.297 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:56.297 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:56.297 ' 00:31:01.586 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:01.586 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:01.586 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:01.586 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:01.586 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:01.586 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:01.586 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:01.586 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:01.586 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:01.586 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:01.587 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:31:01.587 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:01.587 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:01.587 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1576172 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1576172 ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1576172 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576172 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576172' 00:31:01.587 killing process with pid 1576172 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1576172 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1576172 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1576172 ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1576172 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1576172 ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1576172 00:31:01.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1576172) - No such process 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1576172 is not found' 00:31:01.587 Process with pid 1576172 is not found 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:01.587 00:31:01.587 real 0m15.531s 00:31:01.587 user 0m31.954s 00:31:01.587 sys 0m0.713s 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:01.587 14:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.587 ************************************ 00:31:01.587 END TEST spdkcli_nvmf_tcp 00:31:01.587 ************************************ 00:31:01.587 14:16:59 -- common/autotest_common.sh@1142 -- # return 0 00:31:01.587 14:16:59 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:01.587 14:16:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:01.587 14:16:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.587 14:16:59 -- common/autotest_common.sh@10 -- # set +x 00:31:01.587 ************************************ 00:31:01.587 START TEST nvmf_identify_passthru 00:31:01.587 ************************************ 00:31:01.587 14:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:01.587 * Looking for test storage... 00:31:01.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:01.587 14:16:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.587 14:16:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.587 14:16:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.587 14:16:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.587 14:16:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.587 14:16:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.587 14:16:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.587 14:16:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:01.587 14:16:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:01.587 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:01.587 14:16:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.850 14:16:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.850 14:16:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.850 14:16:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.850 14:16:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.850 14:16:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.850 14:16:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.850 14:16:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:01.850 14:16:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.850 14:16:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.850 14:16:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:01.850 14:16:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:01.850 14:16:59 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:01.850 14:16:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.026 14:17:07 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.026 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:10.027 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:10.027 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:10.027 Found net devices under 0000:31:00.0: cvl_0_0 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:10.027 Found net devices under 0000:31:00.1: cvl_0_1 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
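The pass above is nvmf/common.sh enumerating supported test NICs by PCI vendor:device ID (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus a list of Mellanox IDs) and resolving each hit to its kernel interface through sysfs. A minimal stand-alone sketch of the same lookup, assuming lspci is installed and hard-coding the E810 ID seen in this run; the 0000:31:00.x addresses and cvl_0_* names are specific to this host:

#!/usr/bin/env bash
# Sketch: map supported NVMe-oF test NICs (Intel E810, 8086:159b) to netdev names,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $net ]] || continue    # port has no bound netdev (e.g. driver not loaded)
        echo "Found net devices under $pci: ${net##*/}"
    done
done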
00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:10.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:31:10.027 00:31:10.027 --- 10.0.0.2 ping statistics --- 00:31:10.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.027 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:31:10.027 00:31:10.027 --- 10.0.0.1 ping statistics --- 00:31:10.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.027 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:10.027 14:17:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:10.027 14:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:10.027 14:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:31:10.027 14:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:31:10.027 14:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:10.027 14:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:10.027 14:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:10.027 14:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:10.027 14:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:10.027 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.289 
14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605499 00:31:10.289 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:10.289 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:10.289 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:10.550 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.811 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:10.811 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:10.811 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:10.811 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1583529 00:31:10.811 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:10.811 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:10.811 14:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1583529 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1583529 ']' 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:10.811 14:17:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:11.071 [2024-07-15 14:17:08.944529] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:31:11.071 [2024-07-15 14:17:08.944589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.071 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.071 [2024-07-15 14:17:09.019083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:11.071 [2024-07-15 14:17:09.087174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.071 [2024-07-15 14:17:09.087212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:11.071 [2024-07-15 14:17:09.087220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.071 [2024-07-15 14:17:09.087226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.071 [2024-07-15 14:17:09.087232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:11.071 [2024-07-15 14:17:09.087366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.071 [2024-07-15 14:17:09.087484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.071 [2024-07-15 14:17:09.087644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.071 [2024-07-15 14:17:09.087645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.642 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:11.642 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:31:11.642 14:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:11.642 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.642 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:11.642 INFO: Log level set to 20 00:31:11.642 INFO: Requests: 00:31:11.642 { 00:31:11.642 "jsonrpc": "2.0", 00:31:11.642 "method": "nvmf_set_config", 00:31:11.642 "id": 1, 00:31:11.642 "params": { 00:31:11.642 "admin_cmd_passthru": { 00:31:11.642 "identify_ctrlr": true 00:31:11.642 } 00:31:11.642 } 00:31:11.642 } 00:31:11.642 00:31:11.642 INFO: response: 00:31:11.642 { 00:31:11.642 "jsonrpc": "2.0", 00:31:11.642 "id": 1, 00:31:11.642 "result": true 00:31:11.642 } 00:31:11.642 00:31:11.642 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.642 14:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:11.642 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.642 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:11.642 INFO: Setting log level to 20 00:31:11.642 INFO: Setting log level to 20 00:31:11.643 INFO: Log level set to 20 00:31:11.643 INFO: Log level set to 20 00:31:11.643 INFO: Requests: 00:31:11.643 { 00:31:11.643 "jsonrpc": "2.0", 00:31:11.643 "method": "framework_start_init", 00:31:11.643 "id": 1 00:31:11.643 } 00:31:11.643 00:31:11.643 INFO: Requests: 00:31:11.643 { 00:31:11.643 "jsonrpc": "2.0", 00:31:11.643 "method": "framework_start_init", 00:31:11.643 "id": 1 00:31:11.643 } 00:31:11.643 00:31:11.902 [2024-07-15 14:17:09.791175] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:11.902 INFO: response: 00:31:11.902 { 00:31:11.902 "jsonrpc": "2.0", 00:31:11.902 "id": 1, 00:31:11.902 "result": true 00:31:11.902 } 00:31:11.902 00:31:11.902 INFO: response: 00:31:11.902 { 00:31:11.902 "jsonrpc": "2.0", 00:31:11.902 "id": 1, 00:31:11.902 "result": true 00:31:11.902 } 00:31:11.902 00:31:11.902 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.902 14:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:11.902 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.902 14:17:09 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:11.902 INFO: Setting log level to 40 00:31:11.902 INFO: Setting log level to 40 00:31:11.902 INFO: Setting log level to 40 00:31:11.902 [2024-07-15 14:17:09.804495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.902 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.902 14:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:11.902 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:11.902 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:11.902 14:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:11.902 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.902 14:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:12.163 Nvme0n1 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.163 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.163 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.163 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:12.163 [2024-07-15 14:17:10.197148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.163 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:12.163 [ 00:31:12.163 { 00:31:12.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:12.163 "subtype": "Discovery", 00:31:12.163 "listen_addresses": [], 00:31:12.163 "allow_any_host": true, 00:31:12.163 "hosts": [] 00:31:12.163 }, 00:31:12.163 { 00:31:12.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.163 "subtype": "NVMe", 00:31:12.163 "listen_addresses": [ 00:31:12.163 { 00:31:12.163 "trtype": "TCP", 00:31:12.163 "adrfam": "IPv4", 00:31:12.163 "traddr": "10.0.0.2", 00:31:12.163 "trsvcid": "4420" 00:31:12.163 } 00:31:12.163 ], 00:31:12.163 "allow_any_host": true, 00:31:12.163 "hosts": [], 00:31:12.163 "serial_number": 
"SPDK00000000000001", 00:31:12.163 "model_number": "SPDK bdev Controller", 00:31:12.163 "max_namespaces": 1, 00:31:12.163 "min_cntlid": 1, 00:31:12.163 "max_cntlid": 65519, 00:31:12.163 "namespaces": [ 00:31:12.163 { 00:31:12.163 "nsid": 1, 00:31:12.163 "bdev_name": "Nvme0n1", 00:31:12.163 "name": "Nvme0n1", 00:31:12.163 "nguid": "363447305260549900253845000000A3", 00:31:12.163 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:31:12.163 } 00:31:12.163 ] 00:31:12.163 } 00:31:12.163 ] 00:31:12.163 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.163 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:12.163 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:12.163 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:12.163 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.423 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:31:12.423 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:12.423 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:12.423 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:12.424 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.424 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:12.424 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:31:12.424 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:12.424 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.424 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:12.424 14:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:12.424 rmmod nvme_tcp 00:31:12.424 rmmod nvme_fabrics 00:31:12.424 rmmod nvme_keyring 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:12.424 14:17:10 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1583529 ']' 00:31:12.424 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1583529 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1583529 ']' 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1583529 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:12.424 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1583529 00:31:12.685 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:12.685 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:12.685 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1583529' 00:31:12.685 killing process with pid 1583529 00:31:12.685 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1583529 00:31:12.685 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1583529 00:31:12.945 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:12.945 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:12.945 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:12.945 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:12.945 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:12.945 14:17:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.945 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:12.945 14:17:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.860 14:17:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:14.860 00:31:14.860 real 0m13.346s 00:31:14.860 user 0m9.699s 00:31:14.860 sys 0m6.559s 00:31:14.860 14:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:14.860 14:17:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:14.860 ************************************ 00:31:14.860 END TEST nvmf_identify_passthru 00:31:14.860 ************************************ 00:31:14.860 14:17:12 -- common/autotest_common.sh@1142 -- # return 0 00:31:14.860 14:17:12 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:14.860 14:17:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:14.860 14:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.860 14:17:12 -- common/autotest_common.sh@10 -- # set +x 00:31:15.122 ************************************ 00:31:15.122 START TEST nvmf_dif 00:31:15.122 ************************************ 00:31:15.122 14:17:12 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:15.122 * Looking for test storage... 
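Before the dif test setup continues, a recap of what the identify-passthru test that just ended (END TEST banner above) actually asserted: Identify data read over NVMe/TCP from the passthru subsystem must match Identify data read directly over PCIe. The trace checks both serial and model number; a condensed sketch of the serial-number half, with the device address, NQN, and grep/awk extraction taken from the trace (the 0000:65:00.0 address and S64GNE0R605499 serial belong to this host's drive):

# Sketch: the core passthru check - the serial number must survive the fabric hop.
id=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
pcie_sn=$("$id" -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
tcp_sn=$("$id" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | awk '/Serial Number:/ {print $3}')
[[ $pcie_sn == "$tcp_sn" ]] || { echo "passthru identify mismatch: $pcie_sn vs $tcp_sn"; exit 1; }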
00:31:15.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.122 14:17:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.122 14:17:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.122 14:17:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.122 14:17:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.122 14:17:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.122 14:17:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.122 14:17:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.122 14:17:13 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:15.122 14:17:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:15.122 14:17:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:15.122 14:17:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:15.122 14:17:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:15.122 14:17:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:15.122 14:17:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.122 14:17:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:15.122 14:17:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:15.122 14:17:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:15.122 14:17:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:23.310 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:23.310 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
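A few entries back, dif.sh exported NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64 and NULL_DIF=1; once the target is up, those values become the DIF-protected null bdev the fio jobs read from (bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1, further below). A sketch of the same provisioning done by hand with scripts/rpc.py, the CLI wrapper behind rpc_cmd; every name and value is copied from this trace:

# Sketch: a 64 MB null bdev with 512 B blocks + 16 B metadata, DIF type 1,
# exported over NVMe/TCP on the namespaced target address.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420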
00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:23.310 Found net devices under 0000:31:00.0: cvl_0_0 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:23.310 Found net devices under 0000:31:00.1: cvl_0_1 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.310 14:17:20 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.310 14:17:21 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:23.310 14:17:21 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.310 14:17:21 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.310 14:17:21 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.310 14:17:21 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:23.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:31:23.310 00:31:23.310 --- 10.0.0.2 ping statistics --- 00:31:23.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.311 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:31:23.311 14:17:21 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:23.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:31:23.311 00:31:23.311 --- 10.0.0.1 ping statistics --- 00:31:23.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.311 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:31:23.311 14:17:21 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.311 14:17:21 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:23.311 14:17:21 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:23.311 14:17:21 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:26.613 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:26.613 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:26.613 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:26.613 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:26.613 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:26.613 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:26.873 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:26.873 14:17:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:26.873 14:17:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1590025 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1590025 00:31:26.873 14:17:24 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1590025 ']' 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:26.873 14:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:26.873 [2024-07-15 14:17:24.961128] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:31:26.873 [2024-07-15 14:17:24.961188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.133 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.133 [2024-07-15 14:17:25.039832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.133 [2024-07-15 14:17:25.115422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.133 [2024-07-15 14:17:25.115462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.133 [2024-07-15 14:17:25.115470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.133 [2024-07-15 14:17:25.115477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.133 [2024-07-15 14:17:25.115482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
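The target that just printed its EAL/app banner was launched as ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF: nvmftestinit earlier moved one E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace while the peer port (cvl_0_1, 10.0.0.1) stayed in the root namespace, so target and initiator exchange real TCP traffic over the wire rather than loopback. A sketch of the same split, substituting a veth pair for the two physical ports used here (an assumption for reproducing it without hardware; run from the SPDK repo root):

# Sketch: namespace-isolated SPDK target; veth pair stands in for cvl_0_0/cvl_0_1.
ip netns add spdk_tgt_ns
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns
ip addr add 10.0.0.1/24 dev veth_ini
ip link set veth_ini up
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
# Run the target inside the namespace; rpc.py still reaches it over the default
# /var/tmp/spdk.sock UNIX socket, which network namespaces do not isolate.
ip netns exec spdk_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &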
00:31:27.133 [2024-07-15 14:17:25.115507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:27.702 14:17:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.702 14:17:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.702 14:17:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:27.702 14:17:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.702 [2024-07-15 14:17:25.770494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.702 14:17:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.702 14:17:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.702 ************************************ 00:31:27.702 START TEST fio_dif_1_default 00:31:27.702 ************************************ 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.702 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.962 bdev_null0 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.962 [2024-07-15 14:17:25.838789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:27.962 { 00:31:27.962 "params": { 00:31:27.962 "name": "Nvme$subsystem", 00:31:27.962 "trtype": "$TEST_TRANSPORT", 00:31:27.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.962 "adrfam": "ipv4", 00:31:27.962 "trsvcid": "$NVMF_PORT", 00:31:27.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.962 "hdgst": ${hdgst:-false}, 00:31:27.962 "ddgst": ${ddgst:-false} 00:31:27.962 }, 00:31:27.962 "method": "bdev_nvme_attach_controller" 00:31:27.962 } 00:31:27.962 EOF 00:31:27.962 )") 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:27.962 "params": { 00:31:27.962 "name": "Nvme0", 00:31:27.962 "trtype": "tcp", 00:31:27.962 "traddr": "10.0.0.2", 00:31:27.962 "adrfam": "ipv4", 00:31:27.962 "trsvcid": "4420", 00:31:27.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.962 "hdgst": false, 00:31:27.962 "ddgst": false 00:31:27.962 }, 00:31:27.962 "method": "bdev_nvme_attach_controller" 00:31:27.962 }' 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:27.962 14:17:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.223 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:28.223 fio-3.35 00:31:28.223 Starting 1 thread 00:31:28.223 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.453 00:31:40.453 filename0: (groupid=0, jobs=1): err= 0: pid=1590591: Mon Jul 15 14:17:36 2024 00:31:40.453 read: IOPS=95, BW=384KiB/s (393kB/s)(3856KiB/10042msec) 00:31:40.453 slat (nsec): min=5402, max=71812, avg=6257.43, stdev=2724.69 00:31:40.453 clat (usec): min=40854, max=44414, avg=41648.60, stdev=499.68 00:31:40.453 lat (usec): min=40860, max=44452, avg=41654.86, stdev=500.10 00:31:40.453 clat percentiles (usec): 00:31:40.453 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:40.453 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:31:40.453 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:40.453 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:31:40.453 | 99.99th=[44303] 00:31:40.453 bw ( KiB/s): min= 352, max= 416, per=100.00%, avg=384.00, stdev=10.38, samples=20 00:31:40.453 iops : min= 88, max= 104, 
avg=96.00, stdev= 2.60, samples=20 00:31:40.453 lat (msec) : 50=100.00% 00:31:40.453 cpu : usr=95.46%, sys=4.33%, ctx=12, majf=0, minf=230 00:31:40.453 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.453 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:40.453 00:31:40.453 Run status group 0 (all jobs): 00:31:40.453 READ: bw=384KiB/s (393kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=3856KiB (3949kB), run=10042-10042msec 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 00:31:40.453 real 0m11.269s 00:31:40.453 user 0m25.101s 00:31:40.453 sys 0m0.761s 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 ************************************ 00:31:40.453 END TEST fio_dif_1_default 00:31:40.453 ************************************ 00:31:40.453 14:17:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:40.453 14:17:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:40.453 14:17:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:40.453 14:17:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 ************************************ 00:31:40.453 START TEST fio_dif_1_multi_subsystems 00:31:40.453 ************************************ 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
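[Annotation] The fio_dif_1_default pass above reduces to a short, fixed RPC sequence against the running nvmf target. Every command below appears verbatim in the xtrace output; only the consolidated, loop-free framing is editorial:

    # Transport with DIF insert/strip enabled (target/dif.sh@50)
    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (target/dif.sh@21)
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # Subsystem, namespace, and TCP listener on 10.0.0.2:4420 (target/dif.sh@22-24)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Teardown once fio has finished (target/dif.sh@38-39)
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd bdev_null_delete bdev_null0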
00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 bdev_null0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 [2024-07-15 14:17:37.187076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 bdev_null1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 14:17:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.453 { 00:31:40.453 "params": { 00:31:40.453 "name": "Nvme$subsystem", 00:31:40.453 "trtype": "$TEST_TRANSPORT", 00:31:40.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.453 "adrfam": "ipv4", 00:31:40.453 "trsvcid": "$NVMF_PORT", 00:31:40.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.453 "hdgst": ${hdgst:-false}, 00:31:40.453 "ddgst": ${ddgst:-false} 00:31:40.453 }, 00:31:40.453 "method": "bdev_nvme_attach_controller" 00:31:40.453 } 00:31:40.453 EOF 00:31:40.453 )") 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:40.453 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.454 { 00:31:40.454 "params": { 00:31:40.454 "name": "Nvme$subsystem", 00:31:40.454 "trtype": "$TEST_TRANSPORT", 00:31:40.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.454 "adrfam": "ipv4", 00:31:40.454 "trsvcid": "$NVMF_PORT", 00:31:40.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.454 "hdgst": ${hdgst:-false}, 00:31:40.454 "ddgst": ${ddgst:-false} 00:31:40.454 }, 00:31:40.454 "method": "bdev_nvme_attach_controller" 00:31:40.454 } 00:31:40.454 EOF 00:31:40.454 )") 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:40.454 "params": { 00:31:40.454 "name": "Nvme0", 00:31:40.454 "trtype": "tcp", 00:31:40.454 "traddr": "10.0.0.2", 00:31:40.454 "adrfam": "ipv4", 00:31:40.454 "trsvcid": "4420", 00:31:40.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.454 "hdgst": false, 00:31:40.454 "ddgst": false 00:31:40.454 }, 00:31:40.454 "method": "bdev_nvme_attach_controller" 00:31:40.454 },{ 00:31:40.454 "params": { 00:31:40.454 "name": "Nvme1", 00:31:40.454 "trtype": "tcp", 00:31:40.454 "traddr": "10.0.0.2", 00:31:40.454 "adrfam": "ipv4", 00:31:40.454 "trsvcid": "4420", 00:31:40.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.454 "hdgst": false, 00:31:40.454 "ddgst": false 00:31:40.454 }, 00:31:40.454 "method": "bdev_nvme_attach_controller" 00:31:40.454 }' 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.454 14:17:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.454 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:40.454 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:40.454 fio-3.35 00:31:40.454 Starting 2 threads 00:31:40.454 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.448 00:31:50.449 filename0: (groupid=0, jobs=1): err= 0: pid=1593010: Mon Jul 15 14:17:48 2024 00:31:50.449 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10037msec) 00:31:50.449 slat (nsec): min=5391, max=33682, avg=7418.60, stdev=4674.24 00:31:50.449 clat (usec): min=40763, max=42601, avg=41625.07, stdev=461.08 00:31:50.449 lat (usec): min=40771, max=42634, avg=41632.49, stdev=461.94 00:31:50.449 clat percentiles (usec): 00:31:50.449 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:50.449 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:31:50.449 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:50.449 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:50.449 | 99.99th=[42730] 
00:31:50.449 bw ( KiB/s): min= 352, max= 416, per=50.08%, avg=384.00, stdev=10.38, samples=20 00:31:50.449 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:31:50.449 lat (msec) : 50=100.00% 00:31:50.449 cpu : usr=96.59%, sys=3.19%, ctx=13, majf=0, minf=96 00:31:50.449 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.449 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.449 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:50.449 filename1: (groupid=0, jobs=1): err= 0: pid=1593011: Mon Jul 15 14:17:48 2024 00:31:50.449 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10001msec) 00:31:50.449 slat (nsec): min=5389, max=34137, avg=7255.08, stdev=4753.69 00:31:50.449 clat (usec): min=40820, max=42996, avg=41649.06, stdev=476.00 00:31:50.449 lat (usec): min=40826, max=43027, avg=41656.32, stdev=476.99 00:31:50.449 clat percentiles (usec): 00:31:50.449 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:50.449 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:31:50.449 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:50.449 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:50.449 | 99.99th=[43254] 00:31:50.449 bw ( KiB/s): min= 352, max= 384, per=49.82%, avg=382.32, stdev= 7.34, samples=19 00:31:50.449 iops : min= 88, max= 96, avg=95.58, stdev= 1.84, samples=19 00:31:50.449 lat (msec) : 50=100.00% 00:31:50.449 cpu : usr=96.76%, sys=3.02%, ctx=13, majf=0, minf=165 00:31:50.449 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.449 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.449 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:50.449 00:31:50.449 Run status group 0 (all jobs): 00:31:50.449 READ: bw=767KiB/s (785kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=7696KiB (7881kB), run=10001-10037msec 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.449 14:17:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.449 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.710 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.710 00:31:50.710 real 0m11.405s 00:31:50.710 user 0m36.288s 00:31:50.710 sys 0m0.995s 00:31:50.710 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:50.710 14:17:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.710 ************************************ 00:31:50.710 END TEST fio_dif_1_multi_subsystems 00:31:50.710 ************************************ 00:31:50.710 14:17:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:50.710 14:17:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:50.710 14:17:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:50.710 14:17:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:50.710 14:17:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:50.710 ************************************ 00:31:50.710 START TEST fio_dif_rand_params 00:31:50.710 ************************************ 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.710 bdev_null0 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.710 [2024-07-15 14:17:48.671281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:50.710 { 00:31:50.710 "params": { 00:31:50.710 "name": "Nvme$subsystem", 00:31:50.710 "trtype": "$TEST_TRANSPORT", 00:31:50.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.710 "adrfam": "ipv4", 00:31:50.710 "trsvcid": "$NVMF_PORT", 00:31:50.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.710 "hdgst": ${hdgst:-false}, 
00:31:50.710 "ddgst": ${ddgst:-false} 00:31:50.710 }, 00:31:50.710 "method": "bdev_nvme_attach_controller" 00:31:50.710 } 00:31:50.710 EOF 00:31:50.710 )") 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
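[Annotation] Once the config above is resolved, fio is launched through the SPDK bdev plugin with two pipe fds: /dev/fd/62 carries the JSON printed next (one bdev_nvme_attach_controller per subsystem) and /dev/fd/61 the generated job file. A minimal on-disk equivalent is sketched below; the command line mirrors the trace, while the job body is an illustrative assumption reconstructed from the parameters this pass sets (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5), with the bdev name Nvme0n1 assumed from the controller name Nvme0:

    # Plugin path exactly as in the trace; bdev.json holds the printed attach-controller config
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

    # job.fio (illustrative sketch of what gen_fio_conf emits for this run)
    # [global]
    # thread=1
    # ioengine=spdk_bdev
    # [filename0]
    # filename=Nvme0n1   # controller Nvme0, namespace 1 (assumed name)
    # rw=randread
    # bs=128k
    # iodepth=3
    # numjobs=3
    # runtime=5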
00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:50.710 "params": { 00:31:50.710 "name": "Nvme0", 00:31:50.710 "trtype": "tcp", 00:31:50.710 "traddr": "10.0.0.2", 00:31:50.710 "adrfam": "ipv4", 00:31:50.710 "trsvcid": "4420", 00:31:50.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.710 "hdgst": false, 00:31:50.710 "ddgst": false 00:31:50.710 }, 00:31:50.710 "method": "bdev_nvme_attach_controller" 00:31:50.710 }' 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:50.710 14:17:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:51.278 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:51.278 ... 
00:31:51.278 fio-3.35 00:31:51.278 Starting 3 threads 00:31:51.278 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.858 00:31:57.858 filename0: (groupid=0, jobs=1): err= 0: pid=1595316: Mon Jul 15 14:17:54 2024 00:31:57.858 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(139MiB/5047msec) 00:31:57.858 slat (nsec): min=5442, max=31288, avg=8682.18, stdev=2137.44 00:31:57.858 clat (usec): min=5746, max=89787, avg=13612.72, stdev=11698.41 00:31:57.858 lat (usec): min=5757, max=89796, avg=13621.40, stdev=11698.27 00:31:57.858 clat percentiles (usec): 00:31:57.858 | 1.00th=[ 6194], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 8291], 00:31:57.858 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10814], 00:31:57.858 | 70.00th=[11469], 80.00th=[12387], 90.00th=[14615], 95.00th=[49546], 00:31:57.858 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[89654], 00:31:57.858 | 99.99th=[89654] 00:31:57.858 bw ( KiB/s): min=19456, max=35072, per=31.98%, avg=28313.60, stdev=4571.83, samples=10 00:31:57.858 iops : min= 152, max= 274, avg=221.20, stdev=35.72, samples=10 00:31:57.858 lat (msec) : 10=45.40%, 20=45.58%, 50=5.69%, 100=3.34% 00:31:57.858 cpu : usr=93.76%, sys=5.11%, ctx=331, majf=0, minf=86 00:31:57.858 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.858 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.858 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.858 filename0: (groupid=0, jobs=1): err= 0: pid=1595317: Mon Jul 15 14:17:54 2024 00:31:57.858 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5006msec) 00:31:57.858 slat (nsec): min=7972, max=31620, avg=9037.59, stdev=1410.87 00:31:57.858 clat (usec): min=5510, max=88790, avg=14167.83, stdev=9858.15 00:31:57.858 lat (usec): min=5518, max=88798, avg=14176.87, stdev=9858.29 00:31:57.858 clat percentiles (usec): 00:31:57.858 | 1.00th=[ 6194], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9634], 00:31:57.858 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11863], 60.00th=[12649], 00:31:57.858 | 70.00th=[13566], 80.00th=[15008], 90.00th=[16581], 95.00th=[48497], 00:31:57.858 | 99.00th=[52691], 99.50th=[53740], 99.90th=[87557], 99.95th=[88605], 00:31:57.858 | 99.99th=[88605] 00:31:57.858 bw ( KiB/s): min=17920, max=32256, per=30.53%, avg=27033.60, stdev=4635.10, samples=10 00:31:57.858 iops : min= 140, max= 252, avg=211.20, stdev=36.21, samples=10 00:31:57.858 lat (msec) : 10=23.89%, 20=70.35%, 50=2.08%, 100=3.68% 00:31:57.858 cpu : usr=93.71%, sys=5.00%, ctx=401, majf=0, minf=152 00:31:57.858 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.858 issued rwts: total=1059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.858 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.858 filename0: (groupid=0, jobs=1): err= 0: pid=1595318: Mon Jul 15 14:17:54 2024 00:31:57.858 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(166MiB/5005msec) 00:31:57.858 slat (nsec): min=5444, max=33315, avg=8255.42, stdev=1219.79 00:31:57.858 clat (usec): min=4907, max=55456, avg=11329.38, stdev=8239.21 00:31:57.858 lat (usec): min=4915, max=55462, avg=11337.64, stdev=8239.39 00:31:57.858 clat percentiles (usec): 
00:31:57.858 | 1.00th=[ 5538], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7767], 00:31:57.858 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10421], 00:31:57.858 | 70.00th=[11076], 80.00th=[11994], 90.00th=[12911], 95.00th=[14484], 00:31:57.858 | 99.00th=[51119], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:31:57.858 | 99.99th=[55313] 00:31:57.858 bw ( KiB/s): min=26368, max=40704, per=38.22%, avg=33843.20, stdev=4653.29, samples=10 00:31:57.858 iops : min= 206, max= 318, avg=264.40, stdev=36.35, samples=10 00:31:57.858 lat (msec) : 10=52.11%, 20=43.81%, 50=2.34%, 100=1.74% 00:31:57.858 cpu : usr=95.64%, sys=4.12%, ctx=15, majf=0, minf=26 00:31:57.858 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.858 issued rwts: total=1324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.858 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.858 00:31:57.858 Run status group 0 (all jobs): 00:31:57.858 READ: bw=86.5MiB/s (90.7MB/s), 26.4MiB/s-33.1MiB/s (27.7MB/s-34.7MB/s), io=436MiB (458MB), run=5005-5047msec 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
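[Annotation] The NULL_DIF=2 pass being set up here repeats the same per-subsystem sequence for subsystems 0, 1 and 2, giving the 4k/iodepth=16 run three DIF-type-2 namespaces to spread 24 jobs across. The rpc_cmd calls are verbatim from the surrounding trace; only the explicit loop framing is assumed:

    for sub in 0 1 2; do
        rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done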
00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 bdev_null0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 [2024-07-15 14:17:54.918777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 bdev_null1 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.858 bdev_null2 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:57.858 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:57.859 { 00:31:57.859 "params": { 00:31:57.859 "name": "Nvme$subsystem", 00:31:57.859 "trtype": "$TEST_TRANSPORT", 00:31:57.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.859 "adrfam": "ipv4", 00:31:57.859 "trsvcid": "$NVMF_PORT", 00:31:57.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.859 "hdgst": ${hdgst:-false}, 00:31:57.859 "ddgst": ${ddgst:-false} 00:31:57.859 }, 00:31:57.859 "method": "bdev_nvme_attach_controller" 00:31:57.859 } 00:31:57.859 EOF 00:31:57.859 )") 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:57.859 { 00:31:57.859 "params": { 00:31:57.859 "name": "Nvme$subsystem", 00:31:57.859 "trtype": "$TEST_TRANSPORT", 00:31:57.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.859 "adrfam": "ipv4", 00:31:57.859 "trsvcid": "$NVMF_PORT", 00:31:57.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.859 "hdgst": ${hdgst:-false}, 00:31:57.859 "ddgst": ${ddgst:-false} 00:31:57.859 }, 00:31:57.859 "method": "bdev_nvme_attach_controller" 00:31:57.859 } 00:31:57.859 EOF 00:31:57.859 )") 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:57.859 14:17:54 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:57.859 { 00:31:57.859 "params": { 00:31:57.859 "name": "Nvme$subsystem", 00:31:57.859 "trtype": "$TEST_TRANSPORT", 00:31:57.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.859 "adrfam": "ipv4", 00:31:57.859 "trsvcid": "$NVMF_PORT", 00:31:57.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.859 "hdgst": ${hdgst:-false}, 00:31:57.859 "ddgst": ${ddgst:-false} 00:31:57.859 }, 00:31:57.859 "method": "bdev_nvme_attach_controller" 00:31:57.859 } 00:31:57.859 EOF 00:31:57.859 )") 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:57.859 "params": { 00:31:57.859 "name": "Nvme0", 00:31:57.859 "trtype": "tcp", 00:31:57.859 "traddr": "10.0.0.2", 00:31:57.859 "adrfam": "ipv4", 00:31:57.859 "trsvcid": "4420", 00:31:57.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:57.859 "hdgst": false, 00:31:57.859 "ddgst": false 00:31:57.859 }, 00:31:57.859 "method": "bdev_nvme_attach_controller" 00:31:57.859 },{ 00:31:57.859 "params": { 00:31:57.859 "name": "Nvme1", 00:31:57.859 "trtype": "tcp", 00:31:57.859 "traddr": "10.0.0.2", 00:31:57.859 "adrfam": "ipv4", 00:31:57.859 "trsvcid": "4420", 00:31:57.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:57.859 "hdgst": false, 00:31:57.859 "ddgst": false 00:31:57.859 }, 00:31:57.859 "method": "bdev_nvme_attach_controller" 00:31:57.859 },{ 00:31:57.859 "params": { 00:31:57.859 "name": "Nvme2", 00:31:57.859 "trtype": "tcp", 00:31:57.859 "traddr": "10.0.0.2", 00:31:57.859 "adrfam": "ipv4", 00:31:57.859 "trsvcid": "4420", 00:31:57.859 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:57.859 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:57.859 "hdgst": false, 00:31:57.859 "ddgst": false 00:31:57.859 }, 00:31:57.859 "method": "bdev_nvme_attach_controller" 00:31:57.859 }' 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:57.859 14:17:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.859 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:57.859 ... 00:31:57.859 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:57.859 ... 00:31:57.859 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:57.859 ... 00:31:57.859 fio-3.35 00:31:57.859 Starting 24 threads 00:31:57.859 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.081 00:32:10.081 filename0: (groupid=0, jobs=1): err= 0: pid=1596710: Mon Jul 15 14:18:06 2024 00:32:10.081 read: IOPS=650, BW=2603KiB/s (2666kB/s)(25.4MiB/10006msec) 00:32:10.081 slat (nsec): min=5585, max=70362, avg=6888.62, stdev=2790.28 00:32:10.081 clat (usec): min=3233, max=33774, avg=24523.78, stdev=5266.87 00:32:10.081 lat (usec): min=3245, max=33780, avg=24530.67, stdev=5266.40 00:32:10.081 clat percentiles (usec): 00:32:10.081 | 1.00th=[ 7439], 5.00th=[19006], 10.00th=[19792], 20.00th=[21365], 00:32:10.081 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22676], 60.00th=[23725], 00:32:10.081 | 70.00th=[25035], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:32:10.081 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:32:10.081 | 99.99th=[33817] 00:32:10.081 bw ( KiB/s): min= 1920, max= 3193, per=5.45%, avg=2599.42, stdev=324.20, samples=19 00:32:10.081 iops : min= 480, max= 798, avg=649.79, stdev=81.01, samples=19 00:32:10.081 lat (msec) : 4=0.25%, 10=0.98%, 20=9.08%, 50=89.70% 00:32:10.081 cpu : usr=98.98%, sys=0.76%, ctx=20, majf=0, minf=9 00:32:10.081 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.081 filename0: (groupid=0, jobs=1): err= 0: pid=1596711: Mon Jul 15 14:18:06 2024 00:32:10.081 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec) 00:32:10.081 slat (nsec): min=5675, max=62582, avg=16109.17, stdev=9592.05 00:32:10.081 clat (usec): min=19174, max=54602, avg=32655.57, stdev=1565.41 00:32:10.081 lat (usec): min=19182, max=54628, avg=32671.68, stdev=1565.12 00:32:10.081 clat percentiles (usec): 00:32:10.081 | 1.00th=[30802], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.081 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.081 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.081 | 99.00th=[33817], 99.50th=[34341], 99.90th=[54789], 99.95th=[54789], 00:32:10.081 | 99.99th=[54789] 00:32:10.081 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1946.74, stdev=68.61, samples=19 00:32:10.081 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:32:10.081 lat (msec) : 20=0.04%, 
50=99.63%, 100=0.33% 00:32:10.081 cpu : usr=99.18%, sys=0.54%, ctx=13, majf=0, minf=9 00:32:10.081 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.081 filename0: (groupid=0, jobs=1): err= 0: pid=1596712: Mon Jul 15 14:18:06 2024 00:32:10.081 read: IOPS=505, BW=2021KiB/s (2070kB/s)(19.8MiB/10005msec) 00:32:10.081 slat (nsec): min=5475, max=59198, avg=14187.77, stdev=9404.84 00:32:10.081 clat (usec): min=8427, max=57314, avg=31550.54, stdev=4602.09 00:32:10.081 lat (usec): min=8433, max=57332, avg=31564.73, stdev=4603.95 00:32:10.081 clat percentiles (usec): 00:32:10.081 | 1.00th=[16450], 5.00th=[22152], 10.00th=[25035], 20.00th=[31851], 00:32:10.081 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:10.081 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[35914], 00:32:10.081 | 99.00th=[45876], 99.50th=[50070], 99.90th=[57410], 99.95th=[57410], 00:32:10.081 | 99.99th=[57410] 00:32:10.081 bw ( KiB/s): min= 1795, max= 2304, per=4.18%, avg=1995.74, stdev=128.63, samples=19 00:32:10.081 iops : min= 448, max= 576, avg=498.89, stdev=32.22, samples=19 00:32:10.081 lat (msec) : 10=0.12%, 20=1.54%, 50=97.80%, 100=0.53% 00:32:10.081 cpu : usr=99.06%, sys=0.67%, ctx=15, majf=0, minf=9 00:32:10.081 IO depths : 1=3.5%, 2=7.8%, 4=18.5%, 8=60.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:32:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 complete : 0=0.0%, 4=92.5%, 8=2.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.081 filename0: (groupid=0, jobs=1): err= 0: pid=1596713: Mon Jul 15 14:18:06 2024 00:32:10.081 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10006msec) 00:32:10.081 slat (nsec): min=5430, max=67978, avg=16402.06, stdev=11346.10 00:32:10.081 clat (usec): min=5557, max=57542, avg=32341.29, stdev=3565.61 00:32:10.081 lat (usec): min=5563, max=57559, avg=32357.69, stdev=3566.11 00:32:10.081 clat percentiles (usec): 00:32:10.081 | 1.00th=[17957], 5.00th=[27395], 10.00th=[31851], 20.00th=[32113], 00:32:10.081 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32900], 00:32:10.081 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:32:10.081 | 99.00th=[43254], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:32:10.081 | 99.99th=[57410] 00:32:10.081 bw ( KiB/s): min= 1792, max= 2080, per=4.11%, avg=1960.21, stdev=77.29, samples=19 00:32:10.081 iops : min= 448, max= 520, avg=490.05, stdev=19.32, samples=19 00:32:10.081 lat (msec) : 10=0.32%, 20=0.85%, 50=98.30%, 100=0.53% 00:32:10.081 cpu : usr=98.31%, sys=1.03%, ctx=159, majf=0, minf=9 00:32:10.081 IO depths : 1=4.3%, 2=9.0%, 4=19.4%, 8=58.2%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 complete : 0=0.0%, 4=92.8%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.081 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.081 filename0: (groupid=0, jobs=1): err= 0: pid=1596714: Mon Jul 
15 14:18:06 2024 00:32:10.081 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10022msec) 00:32:10.081 slat (nsec): min=5586, max=74633, avg=22225.87, stdev=13133.24 00:32:10.081 clat (usec): min=5485, max=34317, avg=32234.12, stdev=2716.19 00:32:10.082 lat (usec): min=5501, max=34332, avg=32256.35, stdev=2716.56 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[14484], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33424], 00:32:10.082 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:32:10.082 | 99.99th=[34341] 00:32:10.082 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=1970.95, stdev=96.30, samples=20 00:32:10.082 iops : min= 480, max= 576, avg=492.70, stdev=24.05, samples=20 00:32:10.082 lat (msec) : 10=0.65%, 20=0.65%, 50=98.71% 00:32:10.082 cpu : usr=98.38%, sys=0.98%, ctx=80, majf=0, minf=9 00:32:10.082 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename0: (groupid=0, jobs=1): err= 0: pid=1596715: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=491, BW=1964KiB/s (2012kB/s)(19.2MiB/10002msec) 00:32:10.082 slat (nsec): min=5581, max=73934, avg=15670.32, stdev=12002.41 00:32:10.082 clat (usec): min=14523, max=43653, avg=32450.78, stdev=1611.76 00:32:10.082 lat (usec): min=14531, max=43661, avg=32466.45, stdev=1612.17 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[22152], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.082 | 99.00th=[34341], 99.50th=[34341], 99.90th=[36439], 99.95th=[42730], 00:32:10.082 | 99.99th=[43779] 00:32:10.082 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1966.74, stdev=63.78, samples=19 00:32:10.082 iops : min= 479, max= 512, avg=491.68, stdev=15.94, samples=19 00:32:10.082 lat (msec) : 20=0.33%, 50=99.67% 00:32:10.082 cpu : usr=98.99%, sys=0.69%, ctx=74, majf=0, minf=9 00:32:10.082 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename0: (groupid=0, jobs=1): err= 0: pid=1596716: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10022msec) 00:32:10.082 slat (nsec): min=4077, max=63226, avg=16660.83, stdev=11314.09 00:32:10.082 clat (usec): min=17968, max=54355, avg=32442.55, stdev=2625.85 00:32:10.082 lat (usec): min=17975, max=54368, avg=32459.21, stdev=2626.37 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[21103], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 
95.00th=[33817], 00:32:10.082 | 99.00th=[41157], 99.50th=[46924], 99.90th=[54264], 99.95th=[54264], 00:32:10.082 | 99.99th=[54264] 00:32:10.082 bw ( KiB/s): min= 1788, max= 2112, per=4.11%, avg=1961.20, stdev=79.03, samples=20 00:32:10.082 iops : min= 447, max= 528, avg=490.30, stdev=19.76, samples=20 00:32:10.082 lat (msec) : 20=0.75%, 50=98.92%, 100=0.33% 00:32:10.082 cpu : usr=99.24%, sys=0.48%, ctx=15, majf=0, minf=9 00:32:10.082 IO depths : 1=5.4%, 2=11.4%, 4=24.1%, 8=52.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename0: (groupid=0, jobs=1): err= 0: pid=1596717: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10001msec) 00:32:10.082 slat (nsec): min=4482, max=69080, avg=19506.06, stdev=11614.14 00:32:10.082 clat (usec): min=15158, max=51419, avg=32505.59, stdev=1429.27 00:32:10.082 lat (usec): min=15167, max=51426, avg=32525.09, stdev=1429.49 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:10.082 | 99.00th=[34341], 99.50th=[34341], 99.90th=[44827], 99.95th=[44827], 00:32:10.082 | 99.99th=[51643] 00:32:10.082 bw ( KiB/s): min= 1916, max= 2048, per=4.11%, avg=1960.16, stdev=61.33, samples=19 00:32:10.082 iops : min= 479, max= 512, avg=490.00, stdev=15.36, samples=19 00:32:10.082 lat (msec) : 20=0.33%, 50=99.63%, 100=0.04% 00:32:10.082 cpu : usr=99.16%, sys=0.57%, ctx=15, majf=0, minf=9 00:32:10.082 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename1: (groupid=0, jobs=1): err= 0: pid=1596718: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10010msec) 00:32:10.082 slat (nsec): min=5588, max=68497, avg=19231.74, stdev=12177.41 00:32:10.082 clat (usec): min=20392, max=62566, avg=32643.17, stdev=1905.02 00:32:10.082 lat (usec): min=20399, max=62598, avg=32662.40, stdev=1904.72 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:10.082 | 99.00th=[33817], 99.50th=[34341], 99.90th=[62653], 99.95th=[62653], 00:32:10.082 | 99.99th=[62653] 00:32:10.082 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1946.47, stdev=68.21, samples=19 00:32:10.082 iops : min= 448, max= 512, avg=486.58, stdev=16.99, samples=19 00:32:10.082 lat (msec) : 50=99.67%, 100=0.33% 00:32:10.082 cpu : usr=99.31%, sys=0.43%, ctx=9, majf=0, minf=9 00:32:10.082 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename1: (groupid=0, jobs=1): err= 0: pid=1596719: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=486, BW=1948KiB/s (1995kB/s)(19.1MiB/10021msec) 00:32:10.082 slat (nsec): min=5189, max=73766, avg=9223.08, stdev=6501.58 00:32:10.082 clat (usec): min=23624, max=66401, avg=32775.43, stdev=2141.21 00:32:10.082 lat (usec): min=23631, max=66421, avg=32784.65, stdev=2141.19 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.082 | 99.00th=[34341], 99.50th=[41157], 99.90th=[66323], 99.95th=[66323], 00:32:10.082 | 99.99th=[66323] 00:32:10.082 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1945.35, stdev=66.57, samples=20 00:32:10.082 iops : min= 448, max= 512, avg=486.30, stdev=16.59, samples=20 00:32:10.082 lat (msec) : 50=99.67%, 100=0.33% 00:32:10.082 cpu : usr=99.14%, sys=0.60%, ctx=11, majf=0, minf=12 00:32:10.082 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename1: (groupid=0, jobs=1): err= 0: pid=1596720: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10022msec) 00:32:10.082 slat (nsec): min=4371, max=64670, avg=13600.82, stdev=10069.14 00:32:10.082 clat (usec): min=19291, max=54746, avg=32635.70, stdev=1750.14 00:32:10.082 lat (usec): min=19298, max=54759, avg=32649.30, stdev=1750.07 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.082 | 99.00th=[34341], 99.50th=[42206], 99.90th=[54789], 99.95th=[54789], 00:32:10.082 | 99.99th=[54789] 00:32:10.082 bw ( KiB/s): min= 1795, max= 2048, per=4.09%, avg=1952.05, stdev=69.64, samples=20 00:32:10.082 iops : min= 448, max= 512, avg=487.90, stdev=17.47, samples=20 00:32:10.082 lat (msec) : 20=0.04%, 50=99.63%, 100=0.33% 00:32:10.082 cpu : usr=99.13%, sys=0.54%, ctx=66, majf=0, minf=9 00:32:10.082 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename1: (groupid=0, jobs=1): err= 0: pid=1596721: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10015msec) 00:32:10.082 slat (nsec): min=5626, max=72686, avg=10217.73, stdev=6975.29 00:32:10.082 clat (usec): min=3261, max=40853, avg=32114.50, stdev=3407.11 00:32:10.082 lat (usec): min=3275, 
max=40860, avg=32124.72, stdev=3406.17 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[ 8717], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.082 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:10.082 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:32:10.082 | 99.99th=[40633] 00:32:10.082 bw ( KiB/s): min= 1920, max= 2432, per=4.16%, avg=1983.90, stdev=120.86, samples=20 00:32:10.082 iops : min= 480, max= 608, avg=495.90, stdev=30.22, samples=20 00:32:10.082 lat (msec) : 4=0.12%, 10=1.11%, 20=0.38%, 50=98.39% 00:32:10.082 cpu : usr=99.25%, sys=0.46%, ctx=21, majf=0, minf=9 00:32:10.082 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.082 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.082 filename1: (groupid=0, jobs=1): err= 0: pid=1596722: Mon Jul 15 14:18:06 2024 00:32:10.082 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:32:10.082 slat (nsec): min=5546, max=69417, avg=18344.85, stdev=11959.51 00:32:10.082 clat (usec): min=10181, max=57367, avg=32525.22, stdev=2353.01 00:32:10.082 lat (usec): min=10187, max=57383, avg=32543.57, stdev=2353.16 00:32:10.082 clat percentiles (usec): 00:32:10.082 | 1.00th=[28705], 5.00th=[31851], 10.00th=[31851], 20.00th=[31851], 00:32:10.082 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.082 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.082 | 99.00th=[34341], 99.50th=[43779], 99.90th=[57410], 99.95th=[57410], 00:32:10.082 | 99.99th=[57410] 00:32:10.083 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1946.74, stdev=68.61, samples=19 00:32:10.083 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:32:10.083 lat (msec) : 20=0.78%, 50=98.90%, 100=0.33% 00:32:10.083 cpu : usr=99.11%, sys=0.62%, ctx=12, majf=0, minf=9 00:32:10.083 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename1: (groupid=0, jobs=1): err= 0: pid=1596723: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:32:10.083 slat (nsec): min=5628, max=67252, avg=19834.86, stdev=11876.41 00:32:10.083 clat (usec): min=11292, max=62823, avg=32530.04, stdev=1980.35 00:32:10.083 lat (usec): min=11298, max=62844, avg=32549.87, stdev=1980.80 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[30802], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.083 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[34341], 99.50th=[34341], 99.90th=[53740], 99.95th=[53740], 00:32:10.083 | 99.99th=[62653] 00:32:10.083 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1953.00, stdev=71.79, samples=19 00:32:10.083 iops : min= 448, max= 
512, avg=488.21, stdev=17.90, samples=19 00:32:10.083 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:32:10.083 cpu : usr=98.25%, sys=1.10%, ctx=169, majf=0, minf=9 00:32:10.083 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename1: (groupid=0, jobs=1): err= 0: pid=1596724: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=490, BW=1960KiB/s (2007kB/s)(19.2MiB/10023msec) 00:32:10.083 slat (nsec): min=4342, max=63471, avg=11212.02, stdev=7962.81 00:32:10.083 clat (usec): min=12225, max=43858, avg=32551.59, stdev=1315.61 00:32:10.083 lat (usec): min=12266, max=43867, avg=32562.80, stdev=1316.05 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[23462], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:10.083 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[33817], 99.50th=[34341], 99.90th=[43254], 99.95th=[43779], 00:32:10.083 | 99.99th=[43779] 00:32:10.083 bw ( KiB/s): min= 1920, max= 2048, per=4.10%, avg=1958.30, stdev=59.70, samples=20 00:32:10.083 iops : min= 480, max= 512, avg=489.50, stdev=14.89, samples=20 00:32:10.083 lat (msec) : 20=0.04%, 50=99.96% 00:32:10.083 cpu : usr=99.25%, sys=0.48%, ctx=10, majf=0, minf=9 00:32:10.083 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename1: (groupid=0, jobs=1): err= 0: pid=1596725: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=491, BW=1967KiB/s (2015kB/s)(19.2MiB/10019msec) 00:32:10.083 slat (nsec): min=5598, max=70558, avg=13933.75, stdev=10827.15 00:32:10.083 clat (usec): min=15063, max=35229, avg=32414.20, stdev=1662.50 00:32:10.083 lat (usec): min=15074, max=35271, avg=32428.13, stdev=1662.80 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[25297], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:10.083 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:32:10.083 | 99.99th=[35390] 00:32:10.083 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1964.40, stdev=62.95, samples=20 00:32:10.083 iops : min= 479, max= 512, avg=491.10, stdev=15.74, samples=20 00:32:10.083 lat (msec) : 20=0.65%, 50=99.35% 00:32:10.083 cpu : usr=98.87%, sys=0.77%, ctx=52, majf=0, minf=9 00:32:10.083 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename2: (groupid=0, 
jobs=1): err= 0: pid=1596726: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:32:10.083 slat (nsec): min=5543, max=67867, avg=15087.64, stdev=11677.65 00:32:10.083 clat (usec): min=10222, max=60802, avg=32576.70, stdev=2643.89 00:32:10.083 lat (usec): min=10228, max=60824, avg=32591.79, stdev=2643.81 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[21627], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.083 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[34341], 99.50th=[46400], 99.90th=[60556], 99.95th=[60556], 00:32:10.083 | 99.99th=[60556] 00:32:10.083 bw ( KiB/s): min= 1792, max= 2052, per=4.08%, avg=1946.74, stdev=69.04, samples=19 00:32:10.083 iops : min= 448, max= 513, avg=486.68, stdev=17.26, samples=19 00:32:10.083 lat (msec) : 20=0.90%, 50=98.77%, 100=0.33% 00:32:10.083 cpu : usr=98.37%, sys=1.02%, ctx=130, majf=0, minf=9 00:32:10.083 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename2: (groupid=0, jobs=1): err= 0: pid=1596727: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10015msec) 00:32:10.083 slat (nsec): min=5598, max=67609, avg=9710.23, stdev=5307.79 00:32:10.083 clat (usec): min=4792, max=42829, avg=32012.57, stdev=3531.60 00:32:10.083 lat (usec): min=4829, max=42836, avg=32022.28, stdev=3530.44 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[ 7439], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:10.083 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:32:10.083 | 99.99th=[42730] 00:32:10.083 bw ( KiB/s): min= 1920, max= 2432, per=4.17%, avg=1990.30, stdev=120.69, samples=20 00:32:10.083 iops : min= 480, max= 608, avg=497.50, stdev=30.18, samples=20 00:32:10.083 lat (msec) : 10=1.28%, 20=0.60%, 50=98.12% 00:32:10.083 cpu : usr=98.41%, sys=0.99%, ctx=111, majf=0, minf=9 00:32:10.083 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename2: (groupid=0, jobs=1): err= 0: pid=1596728: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10015msec) 00:32:10.083 slat (nsec): min=5629, max=60326, avg=14275.59, stdev=9426.21 00:32:10.083 clat (usec): min=25377, max=66580, avg=32709.55, stdev=2088.34 00:32:10.083 lat (usec): min=25383, max=66601, avg=32723.83, stdev=2087.75 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32900], 00:32:10.083 | 
70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[34341], 99.50th=[34341], 99.90th=[66323], 99.95th=[66323], 00:32:10.083 | 99.99th=[66323] 00:32:10.083 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1945.35, stdev=66.57, samples=20 00:32:10.083 iops : min= 448, max= 512, avg=486.30, stdev=16.59, samples=20 00:32:10.083 lat (msec) : 50=99.67%, 100=0.33% 00:32:10.083 cpu : usr=99.09%, sys=0.63%, ctx=11, majf=0, minf=9 00:32:10.083 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename2: (groupid=0, jobs=1): err= 0: pid=1596729: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10021msec) 00:32:10.083 slat (nsec): min=5669, max=70038, avg=17859.35, stdev=10493.85 00:32:10.083 clat (usec): min=12029, max=52432, avg=32573.76, stdev=1619.97 00:32:10.083 lat (usec): min=12035, max=52471, avg=32591.62, stdev=1620.11 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[30802], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.083 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[33817], 99.50th=[34341], 99.90th=[52167], 99.95th=[52167], 00:32:10.083 | 99.99th=[52691] 00:32:10.083 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1946.74, stdev=68.61, samples=19 00:32:10.083 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:32:10.083 lat (msec) : 20=0.08%, 50=99.59%, 100=0.33% 00:32:10.083 cpu : usr=99.04%, sys=0.62%, ctx=69, majf=0, minf=9 00:32:10.083 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.083 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.083 filename2: (groupid=0, jobs=1): err= 0: pid=1596730: Mon Jul 15 14:18:06 2024 00:32:10.083 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10009msec) 00:32:10.083 slat (nsec): min=5610, max=63371, avg=13648.32, stdev=8647.79 00:32:10.083 clat (usec): min=15270, max=49054, avg=32476.33, stdev=1827.93 00:32:10.083 lat (usec): min=15287, max=49072, avg=32489.98, stdev=1827.97 00:32:10.083 clat percentiles (usec): 00:32:10.083 | 1.00th=[22414], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:32:10.083 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.083 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.083 | 99.00th=[34341], 99.50th=[39060], 99.90th=[49021], 99.95th=[49021], 00:32:10.083 | 99.99th=[49021] 00:32:10.083 bw ( KiB/s): min= 1920, max= 2064, per=4.11%, avg=1961.16, stdev=62.01, samples=19 00:32:10.084 iops : min= 480, max= 516, avg=490.21, stdev=15.48, samples=19 00:32:10.084 lat (msec) : 20=0.28%, 50=99.72% 00:32:10.084 cpu : usr=99.09%, sys=0.58%, ctx=58, majf=0, minf=9 00:32:10.084 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:10.084 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.084 filename2: (groupid=0, jobs=1): err= 0: pid=1596731: Mon Jul 15 14:18:06 2024 00:32:10.084 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:32:10.084 slat (nsec): min=5507, max=73350, avg=20057.12, stdev=11170.41 00:32:10.084 clat (usec): min=10027, max=52082, avg=32527.81, stdev=2053.54 00:32:10.084 lat (usec): min=10033, max=52104, avg=32547.87, stdev=2053.88 00:32:10.084 clat percentiles (usec): 00:32:10.084 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.084 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:10.084 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:10.084 | 99.00th=[34341], 99.50th=[40633], 99.90th=[52167], 99.95th=[52167], 00:32:10.084 | 99.99th=[52167] 00:32:10.084 bw ( KiB/s): min= 1795, max= 2048, per=4.08%, avg=1946.42, stdev=67.93, samples=19 00:32:10.084 iops : min= 448, max= 512, avg=486.53, stdev=17.02, samples=19 00:32:10.084 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:32:10.084 cpu : usr=99.14%, sys=0.58%, ctx=8, majf=0, minf=9 00:32:10.084 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.084 filename2: (groupid=0, jobs=1): err= 0: pid=1596732: Mon Jul 15 14:18:06 2024 00:32:10.084 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10014msec) 00:32:10.084 slat (nsec): min=5596, max=70789, avg=16570.57, stdev=12931.41 00:32:10.084 clat (usec): min=19097, max=66390, avg=32688.65, stdev=2211.03 00:32:10.084 lat (usec): min=19110, max=66411, avg=32705.22, stdev=2210.51 00:32:10.084 clat percentiles (usec): 00:32:10.084 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.084 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32900], 00:32:10.084 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:32:10.084 | 99.00th=[34341], 99.50th=[41157], 99.90th=[66323], 99.95th=[66323], 00:32:10.084 | 99.99th=[66323] 00:32:10.084 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1945.35, stdev=66.57, samples=20 00:32:10.084 iops : min= 448, max= 512, avg=486.30, stdev=16.59, samples=20 00:32:10.084 lat (msec) : 20=0.04%, 50=99.63%, 100=0.33% 00:32:10.084 cpu : usr=98.33%, sys=1.01%, ctx=97, majf=0, minf=9 00:32:10.084 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.084 filename2: (groupid=0, jobs=1): err= 0: pid=1596733: Mon Jul 15 14:18:06 2024 00:32:10.084 read: IOPS=485, BW=1941KiB/s (1987kB/s)(19.0MiB/10005msec) 00:32:10.084 slat (nsec): min=5633, max=70386, avg=19484.94, stdev=12062.28 00:32:10.084 clat (usec): min=10107, max=75454, 
avg=32792.08, stdev=3090.84 00:32:10.084 lat (usec): min=10113, max=75473, avg=32811.56, stdev=3090.74 00:32:10.084 clat percentiles (usec): 00:32:10.084 | 1.00th=[21627], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:32:10.084 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32900], 00:32:10.084 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[34341], 00:32:10.084 | 99.00th=[44827], 99.50th=[48497], 99.90th=[57410], 99.95th=[57410], 00:32:10.084 | 99.99th=[74974] 00:32:10.084 bw ( KiB/s): min= 1792, max= 2048, per=4.04%, avg=1929.16, stdev=61.74, samples=19 00:32:10.084 iops : min= 448, max= 512, avg=482.21, stdev=15.42, samples=19 00:32:10.084 lat (msec) : 20=0.84%, 50=98.66%, 100=0.49% 00:32:10.084 cpu : usr=99.12%, sys=0.61%, ctx=11, majf=0, minf=9 00:32:10.084 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.084 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:10.084 00:32:10.084 Run status group 0 (all jobs): 00:32:10.084 READ: bw=46.6MiB/s (48.8MB/s), 1941KiB/s-2603KiB/s (1987kB/s-2666kB/s), io=467MiB (490MB), run=10001-10023msec 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:10.084 14:18:06 
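A quick consistency check on the run summary just above: for these 4KiB random reads, a job reporting BW=1954KiB/s works out to 1954/4 ≈ 488 IOPS, matching its IOPS=488 and iops avg=487.90 lines; that job's per=4.09% share is 1952/(46.6*1024) ≈ 4.1% of the 46.6MiB/s aggregate; and 46.6MiB/s sustained over the ~10s run window (run=10001-10023msec) is ≈467MiB, matching io=467MiB.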
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 bdev_null0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- 
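The parameter block traced just above (target/dif.sh@115) reshapes the next run: bs=8k,16k,128k is fio's per-direction block-size syntax (8KiB reads, 16KiB writes, 128KiB trims, which is exactly how the job headers render it further down), runtime=5 bounds each job to roughly five seconds, and create_subsystems 0 1 with files=1 produces two filename sections; with numjobs=2 per section, that accounts for the "Starting 4 threads" banner below.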
common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 [2024-07-15 14:18:06.626296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.084 bdev_null1 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.084 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:10.085 14:18:06 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:10.085 { 00:32:10.085 "params": { 00:32:10.085 "name": "Nvme$subsystem", 00:32:10.085 "trtype": "$TEST_TRANSPORT", 00:32:10.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.085 "adrfam": "ipv4", 00:32:10.085 "trsvcid": "$NVMF_PORT", 00:32:10.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.085 "hdgst": ${hdgst:-false}, 00:32:10.085 "ddgst": ${ddgst:-false} 00:32:10.085 }, 00:32:10.085 "method": "bdev_nvme_attach_controller" 00:32:10.085 } 00:32:10.085 EOF 00:32:10.085 )") 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:10.085 { 00:32:10.085 "params": { 00:32:10.085 "name": "Nvme$subsystem", 00:32:10.085 "trtype": "$TEST_TRANSPORT", 00:32:10.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.085 "adrfam": "ipv4", 00:32:10.085 "trsvcid": "$NVMF_PORT", 00:32:10.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.085 "hdgst": ${hdgst:-false}, 00:32:10.085 "ddgst": ${ddgst:-false} 
00:32:10.085 }, 00:32:10.085 "method": "bdev_nvme_attach_controller" 00:32:10.085 } 00:32:10.085 EOF 00:32:10.085 )") 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:10.085 "params": { 00:32:10.085 "name": "Nvme0", 00:32:10.085 "trtype": "tcp", 00:32:10.085 "traddr": "10.0.0.2", 00:32:10.085 "adrfam": "ipv4", 00:32:10.085 "trsvcid": "4420", 00:32:10.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:10.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:10.085 "hdgst": false, 00:32:10.085 "ddgst": false 00:32:10.085 }, 00:32:10.085 "method": "bdev_nvme_attach_controller" 00:32:10.085 },{ 00:32:10.085 "params": { 00:32:10.085 "name": "Nvme1", 00:32:10.085 "trtype": "tcp", 00:32:10.085 "traddr": "10.0.0.2", 00:32:10.085 "adrfam": "ipv4", 00:32:10.085 "trsvcid": "4420", 00:32:10.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.085 "hdgst": false, 00:32:10.085 "ddgst": false 00:32:10.085 }, 00:32:10.085 "method": "bdev_nvme_attach_controller" 00:32:10.085 }' 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:10.085 14:18:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:10.085 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:10.085 ... 00:32:10.085 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:10.085 ... 
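The fio invocation at the end of the trace above is wired through inherited descriptors: /dev/fd/62 carries the generated attach-controller JSON and /dev/fd/61 the fio job file, while LD_PRELOAD injects SPDK's external spdk_bdev ioengine (the empty asan_lib checks before it would have prepended a sanitizer runtime had one been linked in). In plain bash the same wiring looks roughly like the sketch below, where gen_nvmf_target_json and gen_fio_conf stand in for the harness helpers of the same names, and process substitution is what yields the /dev/fd/NN paths seen in the trace:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) <(gen_fio_conf)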
00:32:10.085 fio-3.35 00:32:10.085 Starting 4 threads 00:32:10.085 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.423 00:32:15.423 filename0: (groupid=0, jobs=1): err= 0: pid=1599268: Mon Jul 15 14:18:12 2024 00:32:15.423 read: IOPS=2086, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5003msec) 00:32:15.423 slat (nsec): min=5399, max=62188, avg=5993.09, stdev=2080.45 00:32:15.423 clat (usec): min=2270, max=6608, avg=3817.86, stdev=626.27 00:32:15.423 lat (usec): min=2275, max=6614, avg=3823.86, stdev=626.24 00:32:15.423 clat percentiles (usec): 00:32:15.423 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3163], 20.00th=[ 3359], 00:32:15.423 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3720], 60.00th=[ 3818], 00:32:15.423 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4621], 95.00th=[ 5211], 00:32:15.423 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6390], 00:32:15.423 | 99.99th=[ 6587] 00:32:15.423 bw ( KiB/s): min=16016, max=17056, per=24.95%, avg=16739.56, stdev=308.88, samples=9 00:32:15.423 iops : min= 2002, max= 2132, avg=2092.44, stdev=38.61, samples=9 00:32:15.423 lat (msec) : 4=73.57%, 10=26.43% 00:32:15.423 cpu : usr=97.54%, sys=2.24%, ctx=3, majf=0, minf=0 00:32:15.423 IO depths : 1=0.4%, 2=1.1%, 4=70.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 issued rwts: total=10437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:15.423 filename0: (groupid=0, jobs=1): err= 0: pid=1599269: Mon Jul 15 14:18:12 2024 00:32:15.423 read: IOPS=2016, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5001msec) 00:32:15.423 slat (nsec): min=5419, max=64090, avg=5964.59, stdev=1996.77 00:32:15.423 clat (usec): min=1074, max=8179, avg=3951.20, stdev=724.39 00:32:15.423 lat (usec): min=1079, max=8212, avg=3957.17, stdev=724.40 00:32:15.423 clat percentiles (usec): 00:32:15.423 | 1.00th=[ 2802], 5.00th=[ 3195], 10.00th=[ 3326], 20.00th=[ 3490], 00:32:15.423 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:32:15.423 | 70.00th=[ 4015], 80.00th=[ 4178], 90.00th=[ 5211], 95.00th=[ 5669], 00:32:15.423 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 7177], 99.95th=[ 7373], 00:32:15.423 | 99.99th=[ 8094] 00:32:15.423 bw ( KiB/s): min=15712, max=16480, per=24.02%, avg=16112.00, stdev=243.05, samples=9 00:32:15.423 iops : min= 1964, max= 2060, avg=2014.00, stdev=30.38, samples=9 00:32:15.423 lat (msec) : 2=0.12%, 4=69.73%, 10=30.15% 00:32:15.423 cpu : usr=97.30%, sys=2.46%, ctx=7, majf=0, minf=9 00:32:15.423 IO depths : 1=0.2%, 2=0.5%, 4=72.1%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 issued rwts: total=10083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:15.423 filename1: (groupid=0, jobs=1): err= 0: pid=1599270: Mon Jul 15 14:18:12 2024 00:32:15.423 read: IOPS=2283, BW=17.8MiB/s (18.7MB/s)(89.2MiB/5002msec) 00:32:15.423 slat (nsec): min=5403, max=77006, avg=6202.91, stdev=2054.92 00:32:15.423 clat (usec): min=1596, max=6332, avg=3486.17, stdev=598.74 00:32:15.423 lat (usec): min=1604, max=6337, avg=3492.37, stdev=598.67 00:32:15.423 clat percentiles (usec): 00:32:15.423 | 1.00th=[ 2376], 5.00th=[ 2638], 10.00th=[ 
2835], 20.00th=[ 2966], 00:32:15.423 | 30.00th=[ 3163], 40.00th=[ 3261], 50.00th=[ 3425], 60.00th=[ 3556], 00:32:15.423 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4359], 95.00th=[ 4621], 00:32:15.423 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 5866], 99.95th=[ 6128], 00:32:15.423 | 99.99th=[ 6325] 00:32:15.423 bw ( KiB/s): min=17920, max=19152, per=27.25%, avg=18279.11, stdev=418.84, samples=9 00:32:15.423 iops : min= 2240, max= 2394, avg=2284.89, stdev=52.36, samples=9 00:32:15.423 lat (msec) : 2=0.09%, 4=83.35%, 10=16.56% 00:32:15.423 cpu : usr=98.02%, sys=1.72%, ctx=6, majf=0, minf=9 00:32:15.423 IO depths : 1=0.1%, 2=3.3%, 4=66.0%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 issued rwts: total=11422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:15.423 filename1: (groupid=0, jobs=1): err= 0: pid=1599272: Mon Jul 15 14:18:12 2024 00:32:15.423 read: IOPS=2001, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5001msec) 00:32:15.423 slat (nsec): min=5404, max=71992, avg=5910.78, stdev=1997.90 00:32:15.423 clat (usec): min=928, max=7725, avg=3980.74, stdev=738.30 00:32:15.423 lat (usec): min=933, max=7758, avg=3986.66, stdev=738.26 00:32:15.423 clat percentiles (usec): 00:32:15.423 | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3490], 00:32:15.423 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:32:15.423 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 5669], 00:32:15.423 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6718], 99.95th=[ 6915], 00:32:15.423 | 99.99th=[ 7635] 00:32:15.423 bw ( KiB/s): min=15792, max=16288, per=23.81%, avg=15971.56, stdev=173.76, samples=9 00:32:15.423 iops : min= 1974, max= 2036, avg=1996.44, stdev=21.72, samples=9 00:32:15.423 lat (usec) : 1000=0.03% 00:32:15.423 lat (msec) : 2=0.12%, 4=67.67%, 10=32.18% 00:32:15.423 cpu : usr=97.52%, sys=2.24%, ctx=6, majf=0, minf=9 00:32:15.423 IO depths : 1=0.3%, 2=0.7%, 4=71.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.423 issued rwts: total=10008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:15.423 00:32:15.423 Run status group 0 (all jobs): 00:32:15.423 READ: bw=65.5MiB/s (68.7MB/s), 15.6MiB/s-17.8MiB/s (16.4MB/s-18.7MB/s), io=328MiB (344MB), run=5001-5003msec 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.423 00:32:15.423 real 0m24.277s 00:32:15.423 user 5m15.515s 00:32:15.423 sys 0m3.875s 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.423 14:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.423 ************************************ 00:32:15.423 END TEST fio_dif_rand_params 00:32:15.423 ************************************ 00:32:15.424 14:18:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:15.424 14:18:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:15.424 14:18:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:15.424 14:18:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.424 14:18:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:15.424 ************************************ 00:32:15.424 START TEST fio_dif_digest 00:32:15.424 ************************************ 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:15.424 14:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.424 bdev_null0 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.424 [2024-07-15 14:18:13.030411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:15.424 { 00:32:15.424 "params": { 00:32:15.424 "name": "Nvme$subsystem", 00:32:15.424 "trtype": "$TEST_TRANSPORT", 00:32:15.424 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:32:15.424 "adrfam": "ipv4", 00:32:15.424 "trsvcid": "$NVMF_PORT", 00:32:15.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.424 "hdgst": ${hdgst:-false}, 00:32:15.424 "ddgst": ${ddgst:-false} 00:32:15.424 }, 00:32:15.424 "method": "bdev_nvme_attach_controller" 00:32:15.424 } 00:32:15.424 EOF 00:32:15.424 )") 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:15.424 "params": { 00:32:15.424 "name": "Nvme0", 00:32:15.424 "trtype": "tcp", 00:32:15.424 "traddr": "10.0.0.2", 00:32:15.424 "adrfam": "ipv4", 00:32:15.424 "trsvcid": "4420", 00:32:15.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.424 "hdgst": true, 00:32:15.424 "ddgst": true 00:32:15.424 }, 00:32:15.424 "method": "bdev_nvme_attach_controller" 00:32:15.424 }' 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:15.424 14:18:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.424 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:15.424 ... 
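With hdgst/ddgst resolved to true, the configuration printed above enables NVMe/TCP header and data digests end to end. Outside the harness, the same job can be reproduced by pointing fio's spdk_bdev engine at an equivalent JSON file; in this sketch the file path and job name are assumptions, and the wrapper object around the params shown above follows SPDK's JSON config layout (fio addresses the namespace as <controller-name>n<nsid>, here Nvme0n1):

cat > /tmp/spdk_bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true, "ddgst": true
      }
    }]
  }]
}
EOF
# The plugin is preloaded into stock fio; job parameters match the run above.
# SPDK's fio plugins require thread=1.
LD_PRELOAD=./spdk/build/fio/spdk_bdev fio --name=digest \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/spdk_bdev.json \
  --thread=1 --filename=Nvme0n1 --rw=randread --bs=128k \
  --iodepth=3 --numjobs=3 --runtime=10 --time_based=1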
00:32:15.424 fio-3.35 00:32:15.424 Starting 3 threads 00:32:15.424 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.655 00:32:27.655 filename0: (groupid=0, jobs=1): err= 0: pid=1600995: Mon Jul 15 14:18:23 2024 00:32:27.655 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(272MiB/10047msec) 00:32:27.655 slat (nsec): min=5779, max=32040, avg=6520.42, stdev=1144.67 00:32:27.655 clat (usec): min=8387, max=57386, avg=13829.79, stdev=2789.07 00:32:27.655 lat (usec): min=8393, max=57393, avg=13836.31, stdev=2789.01 00:32:27.655 clat percentiles (usec): 00:32:27.655 | 1.00th=[ 9896], 5.00th=[11863], 10.00th=[12256], 20.00th=[12911], 00:32:27.655 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:32:27.655 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:32:27.655 | 99.00th=[16581], 99.50th=[17433], 99.90th=[56361], 99.95th=[56886], 00:32:27.655 | 99.99th=[57410] 00:32:27.655 bw ( KiB/s): min=22784, max=30208, per=33.40%, avg=27814.40, stdev=1378.04, samples=20 00:32:27.655 iops : min= 178, max= 236, avg=217.30, stdev=10.77, samples=20 00:32:27.655 lat (msec) : 10=1.10%, 20=98.53%, 100=0.37% 00:32:27.655 cpu : usr=95.53%, sys=4.26%, ctx=26, majf=0, minf=73 00:32:27.655 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.655 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.655 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.655 filename0: (groupid=0, jobs=1): err= 0: pid=1600996: Mon Jul 15 14:18:23 2024 00:32:27.655 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(282MiB/10047msec) 00:32:27.655 slat (nsec): min=5773, max=33804, avg=7730.02, stdev=1606.42 00:32:27.655 clat (usec): min=7090, max=95493, avg=13336.79, stdev=3339.16 00:32:27.655 lat (usec): min=7107, max=95502, avg=13344.52, stdev=3339.19 00:32:27.655 clat percentiles (usec): 00:32:27.655 | 1.00th=[ 9765], 5.00th=[11207], 10.00th=[11731], 20.00th=[12256], 00:32:27.655 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:32:27.655 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:32:27.655 | 99.00th=[16057], 99.50th=[16581], 99.90th=[55837], 99.95th=[56886], 00:32:27.655 | 99.99th=[95945] 00:32:27.655 bw ( KiB/s): min=23552, max=31232, per=34.63%, avg=28838.40, stdev=1608.99, samples=20 00:32:27.655 iops : min= 184, max= 244, avg=225.30, stdev=12.57, samples=20 00:32:27.655 lat (msec) : 10=1.33%, 20=98.23%, 50=0.04%, 100=0.40% 00:32:27.655 cpu : usr=95.60%, sys=4.17%, ctx=23, majf=0, minf=170 00:32:27.655 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.655 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.655 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.655 filename0: (groupid=0, jobs=1): err= 0: pid=1600997: Mon Jul 15 14:18:23 2024 00:32:27.655 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(263MiB/10005msec) 00:32:27.655 slat (nsec): min=5693, max=31741, avg=6494.90, stdev=944.18 00:32:27.655 clat (usec): min=7885, max=56422, avg=14243.15, stdev=2553.48 00:32:27.655 lat (usec): min=7891, max=56454, avg=14249.65, stdev=2553.77 00:32:27.655 clat percentiles (usec): 00:32:27.655 | 
1.00th=[ 9896], 5.00th=[12125], 10.00th=[12649], 20.00th=[13173], 00:32:27.655 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14222], 60.00th=[14484], 00:32:27.655 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:32:27.655 | 99.00th=[17171], 99.50th=[17433], 99.90th=[56361], 99.95th=[56361], 00:32:27.655 | 99.99th=[56361] 00:32:27.655 bw ( KiB/s): min=24625, max=29184, per=32.40%, avg=26976.89, stdev=1094.25, samples=19 00:32:27.655 iops : min= 192, max= 228, avg=210.74, stdev= 8.59, samples=19 00:32:27.655 lat (msec) : 10=1.09%, 20=98.62%, 100=0.28% 00:32:27.655 cpu : usr=96.10%, sys=3.68%, ctx=30, majf=0, minf=151 00:32:27.655 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.656 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.656 00:32:27.656 Run status group 0 (all jobs): 00:32:27.656 READ: bw=81.3MiB/s (85.3MB/s), 26.3MiB/s-28.1MiB/s (27.6MB/s-29.4MB/s), io=817MiB (857MB), run=10005-10047msec 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.656 00:32:27.656 real 0m11.088s 00:32:27.656 user 0m42.456s 00:32:27.656 sys 0m1.537s 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.656 14:18:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.656 ************************************ 00:32:27.656 END TEST fio_dif_digest 00:32:27.656 ************************************ 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:27.656 14:18:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:27.656 14:18:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:32:27.656 rmmod nvme_tcp 00:32:27.656 rmmod nvme_fabrics 00:32:27.656 rmmod nvme_keyring 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1590025 ']' 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1590025 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1590025 ']' 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1590025 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1590025 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1590025' 00:32:27.656 killing process with pid 1590025 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1590025 00:32:27.656 14:18:24 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1590025 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:27.656 14:18:24 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:30.205 Waiting for block devices as requested 00:32:30.205 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:30.205 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:30.205 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:30.205 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:30.465 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:30.465 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:30.465 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:30.726 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:30.726 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:30.987 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:30.987 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:30.987 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:30.987 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:31.257 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:31.257 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:31.257 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:31.257 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:31.257 14:18:29 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:31.257 14:18:29 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:31.257 14:18:29 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:31.257 14:18:29 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:31.257 14:18:29 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.257 14:18:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:31.257 14:18:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.851 14:18:31 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:33.851 00:32:33.851 real 1m18.434s 00:32:33.851 user 8m1.942s 00:32:33.851 sys 0m20.432s 00:32:33.851 14:18:31 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:32:33.851 14:18:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:33.851 ************************************ 00:32:33.851 END TEST nvmf_dif 00:32:33.851 ************************************ 00:32:33.851 14:18:31 -- common/autotest_common.sh@1142 -- # return 0 00:32:33.851 14:18:31 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:33.851 14:18:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:33.851 14:18:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:33.851 14:18:31 -- common/autotest_common.sh@10 -- # set +x 00:32:33.851 ************************************ 00:32:33.851 START TEST nvmf_abort_qd_sizes 00:32:33.851 ************************************ 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:33.851 * Looking for test storage... 00:32:33.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.851 14:18:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:33.851 14:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:41.996 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:41.996 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:41.996 Found net devices under 0000:31:00.0: cvl_0_0 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:41.996 Found net devices under 0000:31:00.1: cvl_0_1 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
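The NIC discovery above boils down to sysfs lookups: match each PCI function's vendor/device pair against the known E810/X722/ConnectX ID tables, then read the netdev name the kernel publishes under that function's net/ directory. A minimal equivalent, restricted to the two E810 IDs this run checks for:

for pci in /sys/bus/pci/devices/*; do
  id="$(cat "$pci/vendor") $(cat "$pci/device")"
  case "$id" in
  "0x8086 0x159b" | "0x8086 0x1592")   # E810 device IDs from the table above
    for net in "$pci"/net/*; do
      # net/ only exists while a netdev driver (here: ice) is bound
      [ -e "$net" ] && echo "${pci##*/} -> ${net##*/}"
    done
    ;;
  esac
done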
00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.996 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:41.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:32:41.997 00:32:41.997 --- 10.0.0.2 ping statistics --- 00:32:41.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.997 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:32:41.997 00:32:41.997 --- 10.0.0.1 ping statistics --- 00:32:41.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.997 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:41.997 14:18:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:46.199 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:46.199 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1611221 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1611221 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1611221 ']' 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:46.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.199 14:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.199 [2024-07-15 14:18:43.775280] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:32:46.199 [2024-07-15 14:18:43.775338] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.199 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.199 [2024-07-15 14:18:43.854400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.199 [2024-07-15 14:18:43.931202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.199 [2024-07-15 14:18:43.931243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.199 [2024-07-15 14:18:43.931253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.199 [2024-07-15 14:18:43.931260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.199 [2024-07-15 14:18:43.931266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.199 [2024-07-15 14:18:43.931442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.199 [2024-07-15 14:18:43.931460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.199 [2024-07-15 14:18:43.931598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.199 [2024-07-15 14:18:43.931599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.459 14:18:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.459 14:18:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:46.459 14:18:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:46.459 14:18:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:46.459 14:18:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:46.719 14:18:44 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:46.719 14:18:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.719 ************************************ 00:32:46.719 START TEST spdk_target_abort 00:32:46.719 ************************************ 00:32:46.719 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:46.719 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:46.719 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:46.719 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.719 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.980 spdk_targetn1 00:32:46.980 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.980 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:46.980 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.980 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.980 [2024-07-15 14:18:44.964851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.981 14:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.981 [2024-07-15 14:18:45.005118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:46.981 14:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.981 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:47.241 [2024-07-15 14:18:45.170004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:440 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:32:47.242 [2024-07-15 14:18:45.170032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0039 p:1 m:0 dnr:0 00:32:47.242 [2024-07-15 14:18:45.232576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2720 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:32:47.242 [2024-07-15 14:18:45.232596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:47.242 [2024-07-15 14:18:45.232836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2744 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:32:47.242 [2024-07-15 14:18:45.232848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:50.542 Initializing NVMe Controllers 00:32:50.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:50.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:50.542 Initialization complete. Launching workers. 00:32:50.542 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11877, failed: 3 00:32:50.542 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3014, failed to submit 8866 00:32:50.542 success 733, unsuccess 2281, failed 0 00:32:50.542 14:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:50.542 14:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:50.542 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.542 [2024-07-15 14:18:48.511917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1032 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:32:50.542 [2024-07-15 14:18:48.511958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:32:50.542 [2024-07-15 14:18:48.581873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:2640 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:32:50.542 [2024-07-15 14:18:48.581900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:50.542 [2024-07-15 14:18:48.653934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:4296 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:32:50.542 [2024-07-15 14:18:48.653961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0024 p:1 m:0 dnr:0 00:32:53.838 Initializing NVMe Controllers 00:32:53.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:53.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:53.838 Initialization complete. Launching workers. 
00:32:53.838 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8567, failed: 3 00:32:53.838 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 7357 00:32:53.838 success 372, unsuccess 841, failed 0 00:32:53.838 14:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.838 14:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.838 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.135 Initializing NVMe Controllers 00:32:57.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:57.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:57.135 Initialization complete. Launching workers. 00:32:57.135 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42389, failed: 0 00:32:57.135 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2628, failed to submit 39761 00:32:57.135 success 594, unsuccess 2034, failed 0 00:32:57.135 14:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:57.135 14:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.135 14:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:57.135 14:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.135 14:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:57.135 14:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.135 14:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1611221 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1611221 ']' 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1611221 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611221 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611221' 00:32:59.050 killing process with pid 1611221 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1611221 00:32:59.050 14:18:56 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1611221 00:32:59.050 00:32:59.050 real 0m12.195s 00:32:59.050 user 0m49.681s 00:32:59.050 sys 0m1.723s 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:59.050 14:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:59.050 ************************************ 00:32:59.050 END TEST spdk_target_abort 00:32:59.050 ************************************ 00:32:59.050 14:18:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:59.050 14:18:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:59.050 14:18:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:59.050 14:18:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.051 14:18:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:59.051 ************************************ 00:32:59.051 START TEST kernel_target_abort 00:32:59.051 ************************************ 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 
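Unlike the SPDK target earlier in the log, the kernel target needs no RPC server: once nvmet is loaded, the subsystem, namespace, and port paths assigned above are just directories and attribute files under configfs. The trace that follows drives exactly this sequence; collected into one sketch here, with the attribute file names taken from the kernel nvmet configfs layout (they are not spelled out in the trace itself, so treat them as an assumption):

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
# Exporting the subsystem through the port is just a symlink.
ln -s "$subsys" "$port/subsystems/"

After the symlink is in place, nvme discover against 10.0.0.1:4420 returns the two discovery-log entries shown further down in the trace.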
00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:59.051 14:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:03.290 Waiting for block devices as requested 00:33:03.290 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:03.290 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:03.549 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:03.549 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:03.549 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:03.549 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:03.810 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:03.810 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:03.810 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:03.810 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:04.071 14:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:04.071 No valid GPT data, bailing 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:33:04.071 00:33:04.071 Discovery Log Number of Records 2, Generation counter 2 00:33:04.071 =====Discovery Log Entry 0====== 00:33:04.071 trtype: tcp 00:33:04.071 adrfam: ipv4 00:33:04.071 subtype: current discovery subsystem 00:33:04.071 treq: not specified, sq flow control disable supported 00:33:04.071 portid: 1 00:33:04.071 trsvcid: 4420 00:33:04.071 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:04.071 traddr: 10.0.0.1 00:33:04.071 eflags: none 00:33:04.071 sectype: none 00:33:04.071 =====Discovery Log Entry 1====== 00:33:04.071 trtype: tcp 00:33:04.071 adrfam: ipv4 00:33:04.071 subtype: nvme subsystem 00:33:04.071 treq: not specified, sq flow control disable supported 00:33:04.071 portid: 1 00:33:04.071 trsvcid: 4420 00:33:04.071 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:04.071 traddr: 10.0.0.1 00:33:04.071 eflags: none 00:33:04.071 sectype: none 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:04.071 14:19:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:04.071 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.365 Initializing NVMe Controllers 00:33:07.365 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:07.365 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:07.365 Initialization complete. Launching workers. 00:33:07.365 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63715, failed: 0 00:33:07.365 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 63715, failed to submit 0 00:33:07.365 success 0, unsuccess 63715, failed 0 00:33:07.365 14:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:07.365 14:19:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:07.365 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.661 Initializing NVMe Controllers 00:33:10.661 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:10.661 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:10.661 Initialization complete. Launching workers. 
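The runs in this block come from rabort() in target/abort_qd_sizes.sh, whose xtrace is interleaved above: it builds the -r transport-ID string field by field, then sweeps queue depths 4, 24 and 64. A close but unverified reconstruction from the trace ($rootdir stands in for the repo path and is an assumption):

rabort() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local qds qd target r
    qds=(4 24 64)
    target=
    for r in trtype adrfam traddr trsvcid subnqn; do
        target=${target:+$target }$r:${!r}    # e.g. 'trtype:tcp adrfam:IPv4 ...'
    done
    for qd in "${qds[@]}"; do
        "$rootdir/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
}

Reading the per-run summaries: 'success' and 'unsuccess' count submitted aborts by whether they actually managed to abort their target I/O (a request that already completed cannot be aborted), while 'failed' counts aborts that errored outright.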
00:33:10.661 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105424, failed: 0 00:33:10.661 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26558, failed to submit 78866 00:33:10.661 success 0, unsuccess 26558, failed 0 00:33:10.661 14:19:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:10.661 14:19:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:10.661 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.206 Initializing NVMe Controllers 00:33:13.206 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:13.206 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:13.206 Initialization complete. Launching workers. 00:33:13.206 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100649, failed: 0 00:33:13.206 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25154, failed to submit 75495 00:33:13.206 success 0, unsuccess 25154, failed 0 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:13.206 14:19:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:17.408 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:33:17.408 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:17.408 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:18.793 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:19.054 00:33:19.054 real 0m20.043s 00:33:19.054 user 0m9.492s 00:33:19.054 sys 0m6.208s 00:33:19.054 14:19:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:19.054 14:19:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:19.054 ************************************ 00:33:19.054 END TEST kernel_target_abort 00:33:19.054 ************************************ 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:19.054 rmmod nvme_tcp 00:33:19.054 rmmod nvme_fabrics 00:33:19.054 rmmod nvme_keyring 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1611221 ']' 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1611221 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1611221 ']' 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1611221 00:33:19.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1611221) - No such process 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1611221 is not found' 00:33:19.054 Process with pid 1611221 is not found 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:19.054 14:19:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:23.262 Waiting for block devices as requested 00:33:23.262 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:23.262 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:23.523 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:23.523 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:23.523 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:23.784 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:23.784 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:33:23.784 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:24.045 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:24.045 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:24.045 14:19:21 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:24.045 14:19:21 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:24.045 14:19:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.045 14:19:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:24.045 14:19:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.045 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.045 14:19:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.957 14:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:26.216 00:33:26.216 real 0m52.571s 00:33:26.216 user 1m4.836s 00:33:26.216 sys 0m19.267s 00:33:26.216 14:19:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.216 14:19:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.216 ************************************ 00:33:26.216 END TEST nvmf_abort_qd_sizes 00:33:26.216 ************************************ 00:33:26.216 14:19:24 -- common/autotest_common.sh@1142 -- # return 0 00:33:26.216 14:19:24 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:26.216 14:19:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:26.216 14:19:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.216 14:19:24 -- common/autotest_common.sh@10 -- # set +x 00:33:26.216 ************************************ 00:33:26.216 START TEST keyring_file 00:33:26.216 ************************************ 00:33:26.216 14:19:24 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:26.216 * Looking for test storage... 
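The abort suite's teardown (nvmftestfini) finished just above: killprocess finds pid 1611221 already gone because spdk_target_abort killed the target itself, so all that remains is unloading the initiator modules, rebinding the test devices to their kernel drivers, and flushing the test address. In outline (helper internals are assumptions; the commands match the trace):

modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
kill -0 1611221 2> /dev/null || echo 'Process with pid 1611221 is not found'
"$rootdir/scripts/setup.sh" reset  # the vfio-pci -> ioatdma/nvme rebinds logged above
ip -4 addr flush cvl_0_1           # drop the 10.0.0.x initiator address

With the node back in its default state, the keyring_file suite starting here gets a clean environment.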
00:33:26.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:26.216 14:19:24 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:26.216 14:19:24 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.216 14:19:24 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.216 14:19:24 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.216 14:19:24 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.216 14:19:24 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.216 14:19:24 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.216 14:19:24 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.216 14:19:24 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.216 14:19:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:26.217 14:19:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:26.217 14:19:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:26.217 14:19:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:26.217 14:19:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:26.217 14:19:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:26.217 14:19:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:26.217 14:19:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.I3iUKjbqWa 00:33:26.217 14:19:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:26.217 14:19:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:26.477 14:19:24 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.I3iUKjbqWa 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.I3iUKjbqWa 00:33:26.477 14:19:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.I3iUKjbqWa 00:33:26.477 14:19:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9zoiKZHPQg 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:26.477 14:19:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:26.477 14:19:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:26.477 14:19:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:26.477 14:19:24 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:26.477 14:19:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:26.477 14:19:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9zoiKZHPQg 00:33:26.477 14:19:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9zoiKZHPQg 00:33:26.477 14:19:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9zoiKZHPQg 00:33:26.477 14:19:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=1621653 00:33:26.477 14:19:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1621653 00:33:26.477 14:19:24 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:26.477 14:19:24 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1621653 ']' 00:33:26.477 14:19:24 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.477 14:19:24 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:26.477 14:19:24 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.477 14:19:24 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:26.477 14:19:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:26.477 [2024-07-15 14:19:24.465377] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
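Both PSK files above are produced by prep_key: mktemp allocates the path, format_interchange_psk emits the key in the NVMe TLS key interchange format, and chmod 0600 locks the file down (the 0600 mode matters later, when the suite deliberately flips it to 0660). The inline 'python -' body is not shown in the trace; per the interchange format it plausibly amounts to the following sketch, where treating the hex string verbatim as the PSK bytes is an assumption about nvmf/common.sh:

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # assumption: the ASCII form is the PSK
crc = zlib.crc32(key).to_bytes(4, "little")  # interchange format appends a little-endian CRC32
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

path=$(mktemp)    # e.g. /tmp/tmp.I3iUKjbqWa above
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"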
00:33:26.477 [2024-07-15 14:19:24.465453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621653 ] 00:33:26.477 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.478 [2024-07-15 14:19:24.538424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.738 [2024-07-15 14:19:24.610887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:27.310 14:19:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.310 [2024-07-15 14:19:25.218255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.310 null0 00:33:27.310 [2024-07-15 14:19:25.250298] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:27.310 [2024-07-15 14:19:25.250532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:27.310 [2024-07-15 14:19:25.258314] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.310 14:19:25 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:27.310 14:19:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.311 [2024-07-15 14:19:25.270349] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:27.311 request: 00:33:27.311 { 00:33:27.311 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.311 "secure_channel": false, 00:33:27.311 "listen_address": { 00:33:27.311 "trtype": "tcp", 00:33:27.311 "traddr": "127.0.0.1", 00:33:27.311 "trsvcid": "4420" 00:33:27.311 }, 00:33:27.311 "method": "nvmf_subsystem_add_listener", 00:33:27.311 "req_id": 1 00:33:27.311 } 00:33:27.311 Got JSON-RPC error response 00:33:27.311 response: 00:33:27.311 { 00:33:27.311 "code": -32602, 00:33:27.311 "message": "Invalid parameters" 00:33:27.311 } 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 
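The es bookkeeping here is autotest_common.sh's NOT() wrapper, which the suite uses for every expected-failure RPC; this second nvmf_subsystem_add_listener call must fail because the listener on 127.0.0.1:4420 was already added above, hence the 'Listener already exists' error. A rough reconstruction consistent with the traced checks (the upstream helper may differ in detail):

NOT() {
    local es=0
    valid_exec_arg "$@" || return 1        # refuse to wrap anything that is not a function/binary
    "$@" || es=$?
    (( es > 128 )) && es=$(( es & 127 ))   # assumption: fold a signal exit back into a plain code
    (( !es == 0 ))                         # NOT succeeds only when the wrapped command failed
}

Here rpc_cmd exits 1 on the JSON-RPC error, so es=1 and the negative test passes.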
00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:27.311 14:19:25 keyring_file -- keyring/file.sh@46 -- # bperfpid=1621903 00:33:27.311 14:19:25 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1621903 /var/tmp/bperf.sock 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1621903 ']' 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:27.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:27.311 14:19:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.311 14:19:25 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:27.311 [2024-07-15 14:19:25.332292] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:33:27.311 [2024-07-15 14:19:25.332356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621903 ] 00:33:27.311 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.311 [2024-07-15 14:19:25.418711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.571 [2024-07-15 14:19:25.482764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.142 14:19:26 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:28.142 14:19:26 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:28.142 14:19:26 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:28.142 14:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:28.142 14:19:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9zoiKZHPQg 00:33:28.142 14:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9zoiKZHPQg 00:33:28.402 14:19:26 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:28.402 14:19:26 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:28.402 14:19:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.402 14:19:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.402 14:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.402 14:19:26 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.I3iUKjbqWa == \/\t\m\p\/\t\m\p\.\I\3\i\U\K\j\b\q\W\a ]] 00:33:28.402 14:19:26 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:28.402 14:19:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:28.402 14:19:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.402 14:19:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:28.402 14:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.663 14:19:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.9zoiKZHPQg == \/\t\m\p\/\t\m\p\.\9\z\o\i\K\Z\H\P\Q\g ]] 00:33:28.663 14:19:26 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:28.663 14:19:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:28.663 14:19:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.663 14:19:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.663 14:19:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.663 14:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.923 14:19:26 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:28.923 14:19:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:28.923 14:19:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:28.923 14:19:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.923 14:19:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.923 14:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.923 14:19:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:28.923 14:19:26 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:28.923 14:19:26 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.923 14:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:29.183 [2024-07-15 14:19:27.123022] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:29.183 nvme0n1 00:33:29.183 14:19:27 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:29.184 14:19:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:29.184 14:19:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:29.184 14:19:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.184 14:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.184 14:19:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:29.445 14:19:27 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:29.445 14:19:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:29.445 14:19:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:29.445 14:19:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:29.445 14:19:27 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.445 14:19:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:29.445 14:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.445 14:19:27 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:29.445 14:19:27 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.705 Running I/O for 1 seconds... 00:33:30.649 00:33:30.649 Latency(us) 00:33:30.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.649 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:30.649 nvme0n1 : 1.01 13111.67 51.22 0.00 0.00 9713.28 6963.20 19660.80 00:33:30.649 =================================================================================================================== 00:33:30.649 Total : 13111.67 51.22 0.00 0.00 9713.28 6963.20 19660.80 00:33:30.649 0 00:33:30.649 14:19:28 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:30.649 14:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:30.910 14:19:28 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.910 14:19:28 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:30.910 14:19:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.910 14:19:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:31.171 14:19:29 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:31.171 14:19:29 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.171 14:19:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:31.171 14:19:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.171 14:19:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:31.171 14:19:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.171 14:19:29 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:31.171 14:19:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.171 14:19:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.171 14:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.171 [2024-07-15 14:19:29.283313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:31.171 [2024-07-15 14:19:29.283724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2177450 (107): Transport endpoint is not connected 00:33:31.171 [2024-07-15 14:19:29.284720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2177450 (9): Bad file descriptor 00:33:31.171 [2024-07-15 14:19:29.285722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:31.171 [2024-07-15 14:19:29.285731] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:31.171 [2024-07-15 14:19:29.285737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:31.458 request: 00:33:31.458 { 00:33:31.458 "name": "nvme0", 00:33:31.458 "trtype": "tcp", 00:33:31.458 "traddr": "127.0.0.1", 00:33:31.458 "adrfam": "ipv4", 00:33:31.458 "trsvcid": "4420", 00:33:31.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:31.458 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:31.458 "prchk_reftag": false, 00:33:31.458 "prchk_guard": false, 00:33:31.458 "hdgst": false, 00:33:31.458 "ddgst": false, 00:33:31.458 "psk": "key1", 00:33:31.458 "method": "bdev_nvme_attach_controller", 00:33:31.458 "req_id": 1 00:33:31.458 } 00:33:31.458 Got JSON-RPC error response 00:33:31.458 response: 00:33:31.458 { 00:33:31.458 "code": -5, 00:33:31.458 "message": "Input/output error" 00:33:31.458 } 00:33:31.458 14:19:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:31.458 14:19:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.458 14:19:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.458 14:19:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.458 14:19:29 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:31.458 14:19:29 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:31.458 14:19:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:31.458 14:19:29 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:31.458 14:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.733 14:19:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:31.733 14:19:29 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:31.733 14:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:31.733 14:19:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:31.733 14:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:32.077 14:19:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:32.077 14:19:29 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:32.077 14:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.077 14:19:30 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:32.077 14:19:30 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.I3iUKjbqWa 00:33:32.077 14:19:30 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:32.077 14:19:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:32.077 14:19:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:32.077 14:19:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:32.077 14:19:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.077 14:19:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:32.077 14:19:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.077 14:19:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:32.077 14:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:32.362 [2024-07-15 14:19:30.238595] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.I3iUKjbqWa': 0100660 00:33:32.362 [2024-07-15 14:19:30.238620] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:32.362 request: 00:33:32.362 { 00:33:32.362 "name": "key0", 00:33:32.362 "path": "/tmp/tmp.I3iUKjbqWa", 00:33:32.362 "method": "keyring_file_add_key", 00:33:32.362 "req_id": 1 00:33:32.362 } 00:33:32.362 Got JSON-RPC error response 00:33:32.362 response: 00:33:32.362 { 00:33:32.362 "code": -1, 00:33:32.362 "message": "Operation not permitted" 00:33:32.362 } 00:33:32.362 14:19:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:32.362 14:19:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:32.362 14:19:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:32.362 14:19:30 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:32.362 14:19:30 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.I3iUKjbqWa 00:33:32.362 14:19:30 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:32.362 14:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I3iUKjbqWa 00:33:32.362 14:19:30 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.I3iUKjbqWa 00:33:32.362 14:19:30 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:32.362 14:19:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:32.362 14:19:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.362 14:19:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.362 14:19:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.362 14:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.622 14:19:30 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:32.622 14:19:30 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.622 14:19:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:32.622 14:19:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.622 14:19:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:32.622 14:19:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.622 14:19:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:32.622 14:19:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.622 14:19:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.622 14:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.622 [2024-07-15 14:19:30.731857] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.I3iUKjbqWa': No such file or directory 00:33:32.622 [2024-07-15 14:19:30.731875] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:32.622 [2024-07-15 14:19:30.731894] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:32.622 [2024-07-15 14:19:30.731899] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:32.622 [2024-07-15 14:19:30.731904] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:32.622 request: 00:33:32.622 { 00:33:32.622 "name": "nvme0", 00:33:32.622 "trtype": "tcp", 00:33:32.622 "traddr": "127.0.0.1", 00:33:32.622 "adrfam": "ipv4", 00:33:32.622 
"trsvcid": "4420", 00:33:32.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.622 "prchk_reftag": false, 00:33:32.622 "prchk_guard": false, 00:33:32.622 "hdgst": false, 00:33:32.622 "ddgst": false, 00:33:32.622 "psk": "key0", 00:33:32.622 "method": "bdev_nvme_attach_controller", 00:33:32.622 "req_id": 1 00:33:32.622 } 00:33:32.622 Got JSON-RPC error response 00:33:32.622 response: 00:33:32.623 { 00:33:32.623 "code": -19, 00:33:32.623 "message": "No such device" 00:33:32.623 } 00:33:32.883 14:19:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:32.883 14:19:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:32.883 14:19:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:32.883 14:19:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:32.883 14:19:30 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:32.883 14:19:30 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VEEpVQuqoD 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:32.883 14:19:30 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:32.883 14:19:30 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:32.883 14:19:30 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:32.883 14:19:30 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:32.883 14:19:30 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:32.883 14:19:30 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VEEpVQuqoD 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VEEpVQuqoD 00:33:32.883 14:19:30 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.VEEpVQuqoD 00:33:32.883 14:19:30 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VEEpVQuqoD 00:33:32.883 14:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VEEpVQuqoD 00:33:33.144 14:19:31 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.144 14:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.404 nvme0n1 00:33:33.404 
14:19:31 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:33.404 14:19:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.404 14:19:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.404 14:19:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.404 14:19:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.404 14:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.404 14:19:31 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:33.404 14:19:31 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:33.404 14:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:33.664 14:19:31 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:33.664 14:19:31 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:33.664 14:19:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.664 14:19:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.664 14:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.924 14:19:31 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:33.924 14:19:31 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:33.924 14:19:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.924 14:19:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.924 14:19:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.924 14:19:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.924 14:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.924 14:19:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:33.924 14:19:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:33.924 14:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:34.185 14:19:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:34.185 14:19:32 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:34.185 14:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.185 14:19:32 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:34.185 14:19:32 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VEEpVQuqoD 00:33:34.185 14:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VEEpVQuqoD 00:33:34.446 14:19:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9zoiKZHPQg 00:33:34.446 14:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9zoiKZHPQg 00:33:34.706 14:19:32 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.706 14:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.706 nvme0n1 00:33:34.706 14:19:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:34.706 14:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:34.967 14:19:33 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:34.967 "subsystems": [ 00:33:34.967 { 00:33:34.967 "subsystem": "keyring", 00:33:34.967 "config": [ 00:33:34.967 { 00:33:34.967 "method": "keyring_file_add_key", 00:33:34.967 "params": { 00:33:34.967 "name": "key0", 00:33:34.967 "path": "/tmp/tmp.VEEpVQuqoD" 00:33:34.967 } 00:33:34.967 }, 00:33:34.967 { 00:33:34.967 "method": "keyring_file_add_key", 00:33:34.967 "params": { 00:33:34.967 "name": "key1", 00:33:34.967 "path": "/tmp/tmp.9zoiKZHPQg" 00:33:34.967 } 00:33:34.967 } 00:33:34.967 ] 00:33:34.967 }, 00:33:34.967 { 00:33:34.967 "subsystem": "iobuf", 00:33:34.967 "config": [ 00:33:34.967 { 00:33:34.967 "method": "iobuf_set_options", 00:33:34.967 "params": { 00:33:34.967 "small_pool_count": 8192, 00:33:34.967 "large_pool_count": 1024, 00:33:34.967 "small_bufsize": 8192, 00:33:34.967 "large_bufsize": 135168 00:33:34.967 } 00:33:34.967 } 00:33:34.967 ] 00:33:34.967 }, 00:33:34.967 { 00:33:34.967 "subsystem": "sock", 00:33:34.967 "config": [ 00:33:34.967 { 00:33:34.967 "method": "sock_set_default_impl", 00:33:34.967 "params": { 00:33:34.967 "impl_name": "posix" 00:33:34.967 } 00:33:34.967 }, 00:33:34.967 { 00:33:34.967 "method": "sock_impl_set_options", 00:33:34.967 "params": { 00:33:34.967 "impl_name": "ssl", 00:33:34.967 "recv_buf_size": 4096, 00:33:34.967 "send_buf_size": 4096, 00:33:34.967 "enable_recv_pipe": true, 00:33:34.967 "enable_quickack": false, 00:33:34.967 "enable_placement_id": 0, 00:33:34.967 "enable_zerocopy_send_server": true, 00:33:34.967 "enable_zerocopy_send_client": false, 00:33:34.967 "zerocopy_threshold": 0, 00:33:34.967 "tls_version": 0, 00:33:34.967 "enable_ktls": false 00:33:34.967 } 00:33:34.967 }, 00:33:34.967 { 00:33:34.967 "method": "sock_impl_set_options", 00:33:34.967 "params": { 00:33:34.967 "impl_name": "posix", 00:33:34.967 "recv_buf_size": 2097152, 00:33:34.967 "send_buf_size": 2097152, 00:33:34.967 "enable_recv_pipe": true, 00:33:34.967 "enable_quickack": false, 00:33:34.967 "enable_placement_id": 0, 00:33:34.967 "enable_zerocopy_send_server": true, 00:33:34.967 "enable_zerocopy_send_client": false, 00:33:34.967 "zerocopy_threshold": 0, 00:33:34.967 "tls_version": 0, 00:33:34.967 "enable_ktls": false 00:33:34.967 } 00:33:34.967 } 00:33:34.967 ] 00:33:34.967 }, 00:33:34.967 { 00:33:34.967 "subsystem": "vmd", 00:33:34.967 "config": [] 00:33:34.967 }, 00:33:34.967 { 00:33:34.967 "subsystem": "accel", 00:33:34.967 "config": [ 00:33:34.967 { 00:33:34.967 "method": "accel_set_options", 00:33:34.967 "params": { 00:33:34.968 "small_cache_size": 128, 00:33:34.968 "large_cache_size": 16, 00:33:34.968 "task_count": 2048, 00:33:34.968 "sequence_count": 2048, 00:33:34.968 "buf_count": 2048 00:33:34.968 } 00:33:34.968 } 00:33:34.968 ] 00:33:34.968 
}, 00:33:34.968 { 00:33:34.968 "subsystem": "bdev", 00:33:34.968 "config": [ 00:33:34.968 { 00:33:34.968 "method": "bdev_set_options", 00:33:34.968 "params": { 00:33:34.968 "bdev_io_pool_size": 65535, 00:33:34.968 "bdev_io_cache_size": 256, 00:33:34.968 "bdev_auto_examine": true, 00:33:34.968 "iobuf_small_cache_size": 128, 00:33:34.968 "iobuf_large_cache_size": 16 00:33:34.968 } 00:33:34.968 }, 00:33:34.968 { 00:33:34.968 "method": "bdev_raid_set_options", 00:33:34.968 "params": { 00:33:34.968 "process_window_size_kb": 1024 00:33:34.968 } 00:33:34.968 }, 00:33:34.968 { 00:33:34.968 "method": "bdev_iscsi_set_options", 00:33:34.968 "params": { 00:33:34.968 "timeout_sec": 30 00:33:34.968 } 00:33:34.968 }, 00:33:34.968 { 00:33:34.968 "method": "bdev_nvme_set_options", 00:33:34.968 "params": { 00:33:34.968 "action_on_timeout": "none", 00:33:34.968 "timeout_us": 0, 00:33:34.968 "timeout_admin_us": 0, 00:33:34.968 "keep_alive_timeout_ms": 10000, 00:33:34.968 "arbitration_burst": 0, 00:33:34.968 "low_priority_weight": 0, 00:33:34.968 "medium_priority_weight": 0, 00:33:34.968 "high_priority_weight": 0, 00:33:34.968 "nvme_adminq_poll_period_us": 10000, 00:33:34.968 "nvme_ioq_poll_period_us": 0, 00:33:34.968 "io_queue_requests": 512, 00:33:34.968 "delay_cmd_submit": true, 00:33:34.968 "transport_retry_count": 4, 00:33:34.968 "bdev_retry_count": 3, 00:33:34.968 "transport_ack_timeout": 0, 00:33:34.968 "ctrlr_loss_timeout_sec": 0, 00:33:34.968 "reconnect_delay_sec": 0, 00:33:34.968 "fast_io_fail_timeout_sec": 0, 00:33:34.968 "disable_auto_failback": false, 00:33:34.968 "generate_uuids": false, 00:33:34.968 "transport_tos": 0, 00:33:34.968 "nvme_error_stat": false, 00:33:34.968 "rdma_srq_size": 0, 00:33:34.968 "io_path_stat": false, 00:33:34.968 "allow_accel_sequence": false, 00:33:34.968 "rdma_max_cq_size": 0, 00:33:34.968 "rdma_cm_event_timeout_ms": 0, 00:33:34.968 "dhchap_digests": [ 00:33:34.968 "sha256", 00:33:34.968 "sha384", 00:33:34.968 "sha512" 00:33:34.968 ], 00:33:34.968 "dhchap_dhgroups": [ 00:33:34.968 "null", 00:33:34.968 "ffdhe2048", 00:33:34.968 "ffdhe3072", 00:33:34.968 "ffdhe4096", 00:33:34.968 "ffdhe6144", 00:33:34.968 "ffdhe8192" 00:33:34.968 ] 00:33:34.968 } 00:33:34.968 }, 00:33:34.968 { 00:33:34.968 "method": "bdev_nvme_attach_controller", 00:33:34.968 "params": { 00:33:34.968 "name": "nvme0", 00:33:34.968 "trtype": "TCP", 00:33:34.968 "adrfam": "IPv4", 00:33:34.968 "traddr": "127.0.0.1", 00:33:34.968 "trsvcid": "4420", 00:33:34.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.968 "prchk_reftag": false, 00:33:34.968 "prchk_guard": false, 00:33:34.968 "ctrlr_loss_timeout_sec": 0, 00:33:34.968 "reconnect_delay_sec": 0, 00:33:34.968 "fast_io_fail_timeout_sec": 0, 00:33:34.968 "psk": "key0", 00:33:34.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.968 "hdgst": false, 00:33:34.968 "ddgst": false 00:33:34.968 } 00:33:34.968 }, 00:33:34.968 { 00:33:34.968 "method": "bdev_nvme_set_hotplug", 00:33:34.968 "params": { 00:33:34.968 "period_us": 100000, 00:33:34.968 "enable": false 00:33:34.968 } 00:33:34.968 }, 00:33:34.968 { 00:33:34.968 "method": "bdev_wait_for_examine" 00:33:34.968 } 00:33:34.968 ] 00:33:34.968 }, 00:33:34.968 { 00:33:34.968 "subsystem": "nbd", 00:33:34.968 "config": [] 00:33:34.968 } 00:33:34.968 ] 00:33:34.968 }' 00:33:34.968 14:19:33 keyring_file -- keyring/file.sh@114 -- # killprocess 1621903 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1621903 ']' 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1621903 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1621903 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1621903' 00:33:34.968 killing process with pid 1621903 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@967 -- # kill 1621903 00:33:34.968 Received shutdown signal, test time was about 1.000000 seconds 00:33:34.968 00:33:34.968 Latency(us) 00:33:34.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.968 =================================================================================================================== 00:33:34.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.968 14:19:33 keyring_file -- common/autotest_common.sh@972 -- # wait 1621903 00:33:35.240 14:19:33 keyring_file -- keyring/file.sh@117 -- # bperfpid=1623412 00:33:35.240 14:19:33 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1623412 /var/tmp/bperf.sock 00:33:35.240 14:19:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1623412 ']' 00:33:35.240 14:19:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:35.240 14:19:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:35.240 14:19:33 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:35.240 14:19:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:35.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
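[editorial note] The save_config dump above and the bdevperf command just traced are two halves of one round-trip: the serialized runtime state (keyring entries, sock options, the TLS-enabled controller) is replayed into a fresh process through process substitution, which is where the -c /dev/fd/63 in the command line comes from. A minimal sketch, assuming it is run from the SPDK repo root:

    # Serialize the live configuration of the old bperf instance...
    config=$(./scripts/rpc.py -s /var/tmp/bperf.sock save_config)

    # ...and boot a new one from it. <(...) materializes as /dev/fd/63,
    # and -z makes bdevperf wait for a perform_tests RPC instead of
    # starting the workload immediately.
    ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")

This is why the JSON echoed below repeats the save_config output verbatim, down to the two keyring_file_add_key entries that must be applied before the bdev subsystem re-attaches the controller with "psk": "key0".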
00:33:35.240 14:19:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:35.240 14:19:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:35.240 14:19:33 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:35.240 "subsystems": [ 00:33:35.240 { 00:33:35.240 "subsystem": "keyring", 00:33:35.240 "config": [ 00:33:35.240 { 00:33:35.240 "method": "keyring_file_add_key", 00:33:35.240 "params": { 00:33:35.240 "name": "key0", 00:33:35.240 "path": "/tmp/tmp.VEEpVQuqoD" 00:33:35.240 } 00:33:35.240 }, 00:33:35.240 { 00:33:35.240 "method": "keyring_file_add_key", 00:33:35.240 "params": { 00:33:35.240 "name": "key1", 00:33:35.240 "path": "/tmp/tmp.9zoiKZHPQg" 00:33:35.240 } 00:33:35.240 } 00:33:35.240 ] 00:33:35.240 }, 00:33:35.240 { 00:33:35.241 "subsystem": "iobuf", 00:33:35.241 "config": [ 00:33:35.241 { 00:33:35.241 "method": "iobuf_set_options", 00:33:35.241 "params": { 00:33:35.241 "small_pool_count": 8192, 00:33:35.241 "large_pool_count": 1024, 00:33:35.241 "small_bufsize": 8192, 00:33:35.241 "large_bufsize": 135168 00:33:35.241 } 00:33:35.241 } 00:33:35.241 ] 00:33:35.241 }, 00:33:35.241 { 00:33:35.241 "subsystem": "sock", 00:33:35.241 "config": [ 00:33:35.241 { 00:33:35.241 "method": "sock_set_default_impl", 00:33:35.241 "params": { 00:33:35.241 "impl_name": "posix" 00:33:35.241 } 00:33:35.241 }, 00:33:35.241 { 00:33:35.241 "method": "sock_impl_set_options", 00:33:35.241 "params": { 00:33:35.241 "impl_name": "ssl", 00:33:35.241 "recv_buf_size": 4096, 00:33:35.241 "send_buf_size": 4096, 00:33:35.241 "enable_recv_pipe": true, 00:33:35.241 "enable_quickack": false, 00:33:35.241 "enable_placement_id": 0, 00:33:35.241 "enable_zerocopy_send_server": true, 00:33:35.241 "enable_zerocopy_send_client": false, 00:33:35.241 "zerocopy_threshold": 0, 00:33:35.241 "tls_version": 0, 00:33:35.241 "enable_ktls": false 00:33:35.241 } 00:33:35.241 }, 00:33:35.241 { 00:33:35.241 "method": "sock_impl_set_options", 00:33:35.241 "params": { 00:33:35.241 "impl_name": "posix", 00:33:35.241 "recv_buf_size": 2097152, 00:33:35.241 "send_buf_size": 2097152, 00:33:35.241 "enable_recv_pipe": true, 00:33:35.241 "enable_quickack": false, 00:33:35.241 "enable_placement_id": 0, 00:33:35.241 "enable_zerocopy_send_server": true, 00:33:35.241 "enable_zerocopy_send_client": false, 00:33:35.241 "zerocopy_threshold": 0, 00:33:35.241 "tls_version": 0, 00:33:35.241 "enable_ktls": false 00:33:35.241 } 00:33:35.241 } 00:33:35.242 ] 00:33:35.242 }, 00:33:35.242 { 00:33:35.242 "subsystem": "vmd", 00:33:35.242 "config": [] 00:33:35.242 }, 00:33:35.242 { 00:33:35.242 "subsystem": "accel", 00:33:35.242 "config": [ 00:33:35.242 { 00:33:35.242 "method": "accel_set_options", 00:33:35.242 "params": { 00:33:35.242 "small_cache_size": 128, 00:33:35.242 "large_cache_size": 16, 00:33:35.242 "task_count": 2048, 00:33:35.242 "sequence_count": 2048, 00:33:35.242 "buf_count": 2048 00:33:35.242 } 00:33:35.242 } 00:33:35.242 ] 00:33:35.242 }, 00:33:35.242 { 00:33:35.242 "subsystem": "bdev", 00:33:35.242 "config": [ 00:33:35.242 { 00:33:35.242 "method": "bdev_set_options", 00:33:35.242 "params": { 00:33:35.242 "bdev_io_pool_size": 65535, 00:33:35.242 "bdev_io_cache_size": 256, 00:33:35.242 "bdev_auto_examine": true, 00:33:35.242 "iobuf_small_cache_size": 128, 00:33:35.242 "iobuf_large_cache_size": 16 00:33:35.242 } 00:33:35.242 }, 00:33:35.242 { 00:33:35.242 "method": "bdev_raid_set_options", 00:33:35.242 "params": { 00:33:35.242 "process_window_size_kb": 1024 00:33:35.242 } 00:33:35.242 }, 00:33:35.242 { 00:33:35.242 
"method": "bdev_iscsi_set_options", 00:33:35.242 "params": { 00:33:35.242 "timeout_sec": 30 00:33:35.242 } 00:33:35.242 }, 00:33:35.242 { 00:33:35.242 "method": "bdev_nvme_set_options", 00:33:35.242 "params": { 00:33:35.242 "action_on_timeout": "none", 00:33:35.242 "timeout_us": 0, 00:33:35.242 "timeout_admin_us": 0, 00:33:35.242 "keep_alive_timeout_ms": 10000, 00:33:35.242 "arbitration_burst": 0, 00:33:35.242 "low_priority_weight": 0, 00:33:35.242 "medium_priority_weight": 0, 00:33:35.242 "high_priority_weight": 0, 00:33:35.242 "nvme_adminq_poll_period_us": 10000, 00:33:35.242 "nvme_ioq_poll_period_us": 0, 00:33:35.242 "io_queue_requests": 512, 00:33:35.242 "delay_cmd_submit": true, 00:33:35.243 "transport_retry_count": 4, 00:33:35.243 "bdev_retry_count": 3, 00:33:35.243 "transport_ack_timeout": 0, 00:33:35.243 "ctrlr_loss_timeout_sec": 0, 00:33:35.243 "reconnect_delay_sec": 0, 00:33:35.243 "fast_io_fail_timeout_sec": 0, 00:33:35.243 "disable_auto_failback": false, 00:33:35.243 "generate_uuids": false, 00:33:35.243 "transport_tos": 0, 00:33:35.243 "nvme_error_stat": false, 00:33:35.243 "rdma_srq_size": 0, 00:33:35.243 "io_path_stat": false, 00:33:35.243 "allow_accel_sequence": false, 00:33:35.243 "rdma_max_cq_size": 0, 00:33:35.243 "rdma_cm_event_timeout_ms": 0, 00:33:35.243 "dhchap_digests": [ 00:33:35.243 "sha256", 00:33:35.243 "sha384", 00:33:35.243 "sha512" 00:33:35.243 ], 00:33:35.243 "dhchap_dhgroups": [ 00:33:35.243 "null", 00:33:35.243 "ffdhe2048", 00:33:35.243 "ffdhe3072", 00:33:35.243 "ffdhe4096", 00:33:35.243 "ffdhe6144", 00:33:35.243 "ffdhe8192" 00:33:35.243 ] 00:33:35.243 } 00:33:35.243 }, 00:33:35.243 { 00:33:35.243 "method": "bdev_nvme_attach_controller", 00:33:35.243 "params": { 00:33:35.243 "name": "nvme0", 00:33:35.243 "trtype": "TCP", 00:33:35.243 "adrfam": "IPv4", 00:33:35.243 "traddr": "127.0.0.1", 00:33:35.243 "trsvcid": "4420", 00:33:35.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.243 "prchk_reftag": false, 00:33:35.243 "prchk_guard": false, 00:33:35.243 "ctrlr_loss_timeout_sec": 0, 00:33:35.243 "reconnect_delay_sec": 0, 00:33:35.243 "fast_io_fail_timeout_sec": 0, 00:33:35.243 "psk": "key0", 00:33:35.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.243 "hdgst": false, 00:33:35.243 "ddgst": false 00:33:35.243 } 00:33:35.243 }, 00:33:35.243 { 00:33:35.243 "method": "bdev_nvme_set_hotplug", 00:33:35.243 "params": { 00:33:35.243 "period_us": 100000, 00:33:35.243 "enable": false 00:33:35.243 } 00:33:35.243 }, 00:33:35.243 { 00:33:35.243 "method": "bdev_wait_for_examine" 00:33:35.243 } 00:33:35.243 ] 00:33:35.243 }, 00:33:35.243 { 00:33:35.243 "subsystem": "nbd", 00:33:35.243 "config": [] 00:33:35.243 } 00:33:35.243 ] 00:33:35.243 }' 00:33:35.243 [2024-07-15 14:19:33.235485] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:33:35.243 [2024-07-15 14:19:33.235545] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623412 ] 00:33:35.243 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.243 [2024-07-15 14:19:33.314614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.503 [2024-07-15 14:19:33.368368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.503 [2024-07-15 14:19:33.509594] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:36.072 14:19:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:36.072 14:19:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:36.072 14:19:33 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:36.072 14:19:33 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:36.072 14:19:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.072 14:19:34 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:36.072 14:19:34 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:36.072 14:19:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:36.072 14:19:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.072 14:19:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.072 14:19:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.072 14:19:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.332 14:19:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:36.332 14:19:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:36.332 14:19:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:36.332 14:19:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.332 14:19:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.332 14:19:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:36.332 14:19:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.592 14:19:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:36.592 14:19:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:36.592 14:19:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:36.592 14:19:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:36.592 14:19:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:36.592 14:19:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:36.592 14:19:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.VEEpVQuqoD /tmp/tmp.9zoiKZHPQg 00:33:36.592 14:19:34 keyring_file -- keyring/file.sh@20 -- # killprocess 1623412 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1623412 ']' 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1623412 00:33:36.592 14:19:34 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1623412 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1623412' 00:33:36.592 killing process with pid 1623412 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@967 -- # kill 1623412 00:33:36.592 Received shutdown signal, test time was about 1.000000 seconds 00:33:36.592 00:33:36.592 Latency(us) 00:33:36.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.592 =================================================================================================================== 00:33:36.592 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:36.592 14:19:34 keyring_file -- common/autotest_common.sh@972 -- # wait 1623412 00:33:36.852 14:19:34 keyring_file -- keyring/file.sh@21 -- # killprocess 1621653 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1621653 ']' 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1621653 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1621653 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1621653' 00:33:36.853 killing process with pid 1621653 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@967 -- # kill 1621653 00:33:36.853 [2024-07-15 14:19:34.861268] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:36.853 14:19:34 keyring_file -- common/autotest_common.sh@972 -- # wait 1621653 00:33:37.113 00:33:37.113 real 0m10.932s 00:33:37.113 user 0m25.998s 00:33:37.113 sys 0m2.556s 00:33:37.113 14:19:35 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:37.113 14:19:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:37.113 ************************************ 00:33:37.113 END TEST keyring_file 00:33:37.113 ************************************ 00:33:37.113 14:19:35 -- common/autotest_common.sh@1142 -- # return 0 00:33:37.113 14:19:35 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:37.113 14:19:35 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:37.113 14:19:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:37.113 14:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.113 14:19:35 -- common/autotest_common.sh@10 -- # set +x 00:33:37.113 ************************************ 00:33:37.113 START TEST keyring_linux 00:33:37.113 ************************************ 00:33:37.113 14:19:35 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:37.374 * Looking for test storage... 00:33:37.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:37.374 14:19:35 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:37.374 14:19:35 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.374 14:19:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:37.374 14:19:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.374 14:19:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.374 14:19:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.374 14:19:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.374 14:19:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.374 14:19:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.375 14:19:35 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.375 14:19:35 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.375 14:19:35 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.375 14:19:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.375 14:19:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.375 14:19:35 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.375 14:19:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:37.375 14:19:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:37.375 14:19:35 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:37.375 /tmp/:spdk-test:key0 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:37.375 14:19:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:37.375 14:19:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:37.375 /tmp/:spdk-test:key1 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1623995 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1623995 00:33:37.375 14:19:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:37.375 14:19:35 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1623995 ']' 00:33:37.375 14:19:35 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.375 14:19:35 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:37.375 14:19:35 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.375 14:19:35 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:37.375 14:19:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:37.375 [2024-07-15 14:19:35.430448] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
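[editorial note] keyring_linux repeats the PSK file setup, but the interesting half happens in the kernel: the same interchange-format secret that was written to /tmp/:spdk-test:key0 is also loaded into the session keyring, and the serial numbers printed just below (793962441 and 240277068) come from keyctl. A sketch of that flow, condensed from the commands traced in this test:

    psk=$(cat /tmp/:spdk-test:key0)

    # Load the PSK into the session keyring under the name SPDK will ask for.
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the new serial

    keyctl search @s user :spdk-test:key0   # resolves the name to the same serial
    keyctl print "$sn"                      # payload must round-trip unchanged

    keyctl unlink "$sn"                     # cleanup; keyctl reports "1 links removed"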
00:33:37.375 [2024-07-15 14:19:35.430527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623995 ] 00:33:37.375 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.635 [2024-07-15 14:19:35.501962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.635 [2024-07-15 14:19:35.576680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:38.205 14:19:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:38.205 [2024-07-15 14:19:36.193206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.205 null0 00:33:38.205 [2024-07-15 14:19:36.225252] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:38.205 [2024-07-15 14:19:36.225639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.205 14:19:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:38.205 793962441 00:33:38.205 14:19:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:38.205 240277068 00:33:38.205 14:19:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1624148 00:33:38.205 14:19:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1624148 /var/tmp/bperf.sock 00:33:38.205 14:19:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1624148 ']' 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:38.205 14:19:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:38.205 [2024-07-15 14:19:36.299941] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
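[editorial note] The bdevperf client above is started with --wait-for-rpc so that the subsystem which resolves :spdk-test:key0 against the kernel keyring can be switched on before the framework initializes. The RPC sequence traced below, condensed (paths relative to the SPDK tree):

    rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    rpc keyring_linux_set_options --enable   # must happen pre-init, hence --wait-for-rpc
    rpc framework_start_init
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0                # a keyring name, not a file path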
00:33:38.205 [2024-07-15 14:19:36.299987] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624148 ] 00:33:38.466 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.466 [2024-07-15 14:19:36.381586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.466 [2024-07-15 14:19:36.435088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.037 14:19:37 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.038 14:19:37 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:39.038 14:19:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:39.038 14:19:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:39.298 14:19:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:39.298 14:19:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:39.559 14:19:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:39.559 14:19:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:39.559 [2024-07-15 14:19:37.549398] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:39.559 nvme0n1 00:33:39.559 14:19:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:39.559 14:19:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:39.559 14:19:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:39.559 14:19:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:39.559 14:19:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:39.559 14:19:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.819 14:19:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:39.819 14:19:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:39.819 14:19:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:39.819 14:19:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:39.819 14:19:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.819 14:19:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.819 14:19:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:40.078 14:19:37 keyring_linux -- keyring/linux.sh@25 -- # sn=793962441 00:33:40.078 14:19:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:40.078 14:19:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:40.078 14:19:37 keyring_linux -- keyring/linux.sh@26 -- # [[ 793962441 == \7\9\3\9\6\2\4\4\1 ]] 00:33:40.078 14:19:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 793962441 00:33:40.078 14:19:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:40.078 14:19:37 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:40.078 Running I/O for 1 seconds... 00:33:41.019 00:33:41.019 Latency(us) 00:33:41.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.019 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:41.019 nvme0n1 : 1.01 13544.89 52.91 0.00 0.00 9405.45 2116.27 10321.92 00:33:41.019 =================================================================================================================== 00:33:41.019 Total : 13544.89 52.91 0.00 0.00 9405.45 2116.27 10321.92 00:33:41.019 0 00:33:41.019 14:19:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:41.019 14:19:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:41.280 14:19:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:41.280 14:19:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:41.280 14:19:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:41.280 14:19:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:41.280 14:19:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:41.281 14:19:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.541 14:19:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:41.541 14:19:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:41.541 14:19:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.542 14:19:39 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.542 [2024-07-15 14:19:39.556034] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:41.542 [2024-07-15 14:19:39.556415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74000 (107): Transport endpoint is not connected 00:33:41.542 [2024-07-15 14:19:39.557412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74000 (9): Bad file descriptor 00:33:41.542 [2024-07-15 14:19:39.558414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:41.542 [2024-07-15 14:19:39.558421] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:41.542 [2024-07-15 14:19:39.558427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:41.542 request: 00:33:41.542 { 00:33:41.542 "name": "nvme0", 00:33:41.542 "trtype": "tcp", 00:33:41.542 "traddr": "127.0.0.1", 00:33:41.542 "adrfam": "ipv4", 00:33:41.542 "trsvcid": "4420", 00:33:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:41.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:41.542 "prchk_reftag": false, 00:33:41.542 "prchk_guard": false, 00:33:41.542 "hdgst": false, 00:33:41.542 "ddgst": false, 00:33:41.542 "psk": ":spdk-test:key1", 00:33:41.542 "method": "bdev_nvme_attach_controller", 00:33:41.542 "req_id": 1 00:33:41.542 } 00:33:41.542 Got JSON-RPC error response 00:33:41.542 response: 00:33:41.542 { 00:33:41.542 "code": -5, 00:33:41.542 "message": "Input/output error" 00:33:41.542 } 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@33 -- # sn=793962441 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 793962441 00:33:41.542 1 links removed 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@33 -- # sn=240277068 00:33:41.542 
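[editorial note] Two things worth noting about the run that just finished. First, the numbers are self-consistent: 13544.89 IOPS of 4 KiB I/O is 13544.89 x 4096 B, about 52.91 MiB/s as reported, and with queue depth 128 Little's law gives 128 / 13544.89, about 9.45 ms, matching the ~9405 us average latency. Second, the failed attach above is deliberate: :spdk-test:key1 exists in the kernel keyring, but the target side presumably only knows key0, so the TLS handshake collapses (errno 107, Transport endpoint is not connected) and the RPC must error out. The NOT wrapper inverts the exit status so the test passes only on failure; a minimal sketch follows, noting that the real helper in autotest_common.sh also validates its argument and inspects the exit code more carefully.

    NOT() {
        ! "$@"   # succeed only when the wrapped command fails
    }

    NOT ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1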
14:19:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 240277068 00:33:41.542 1 links removed 00:33:41.542 14:19:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1624148 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1624148 ']' 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1624148 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1624148 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1624148' 00:33:41.542 killing process with pid 1624148 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 1624148 00:33:41.542 Received shutdown signal, test time was about 1.000000 seconds 00:33:41.542 00:33:41.542 Latency(us) 00:33:41.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.542 =================================================================================================================== 00:33:41.542 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.542 14:19:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 1624148 00:33:41.803 14:19:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1623995 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1623995 ']' 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1623995 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1623995 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1623995' 00:33:41.803 killing process with pid 1623995 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 1623995 00:33:41.803 14:19:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 1623995 00:33:42.064 00:33:42.064 real 0m4.865s 00:33:42.064 user 0m8.650s 00:33:42.064 sys 0m1.432s 00:33:42.064 14:19:40 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:42.064 14:19:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:42.064 ************************************ 00:33:42.064 END TEST keyring_linux 00:33:42.064 ************************************ 00:33:42.064 14:19:40 -- common/autotest_common.sh@1142 -- # return 0 00:33:42.064 14:19:40 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:42.064 14:19:40 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:42.064 14:19:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:42.064 14:19:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:42.064 14:19:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:42.064 14:19:40 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:42.064 14:19:40 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:42.064 14:19:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:42.064 14:19:40 -- common/autotest_common.sh@10 -- # set +x 00:33:42.064 14:19:40 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:42.064 14:19:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:42.064 14:19:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:42.064 14:19:40 -- common/autotest_common.sh@10 -- # set +x 00:33:50.197 INFO: APP EXITING 00:33:50.197 INFO: killing all VMs 00:33:50.197 INFO: killing vhost app 00:33:50.197 INFO: EXIT DONE 00:33:53.491 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:53.491 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:53.491 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:57.692 Cleaning 00:33:57.693 Removing: /var/run/dpdk/spdk0/config 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:57.693 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:57.693 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:57.693 Removing: /var/run/dpdk/spdk1/config 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:57.693 Removing: 
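[editorial note] Teardown above runs the killprocess helper twice (pids 1624148 and 1623995). A sketch condensed from the trace; the real helper also handles non-Linux platforms, and its sudo branch (elided here as a plain refusal) actually targets sudo's child process instead:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1          # still running?
        if [[ $(uname) == Linux ]] &&
           [[ $(ps --no-headers -o comm= "$pid") == sudo ]]; then
            return 1                        # simplified; see note above
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                 # reap it; its exit code may be nonzero
    }

In this log the comm check resolves to reactor_0 / reactor_1 (the SPDK reactor threads), so both kills proceed and each daemon prints its shutdown latency table before exiting.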
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:57.693 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:57.693 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:57.693 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:57.693 Removing: /var/run/dpdk/spdk2/config 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:57.693 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:57.693 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:57.693 Removing: /var/run/dpdk/spdk3/config 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:57.693 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:57.693 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:57.693 Removing: /var/run/dpdk/spdk4/config 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:57.693 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:57.693 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:57.693 Removing: /dev/shm/bdev_svc_trace.1 00:33:57.693 Removing: /dev/shm/nvmf_trace.0 00:33:57.693 Removing: /dev/shm/spdk_tgt_trace.pid1140186 00:33:57.693 Removing: /var/run/dpdk/spdk0 00:33:57.693 Removing: /var/run/dpdk/spdk1 00:33:57.693 Removing: /var/run/dpdk/spdk2 00:33:57.693 Removing: /var/run/dpdk/spdk3 00:33:57.693 Removing: /var/run/dpdk/spdk4 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1138577 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1140186 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1140736 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1141842 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1142113 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1143287 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1143504 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1143800 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1144765 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1145536 00:33:57.693 Removing: 
/var/run/dpdk/spdk_pid1145921 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1146203 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1146499 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1146786 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1147142 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1147492 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1147797 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1148942 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1152231 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1152585 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1152933 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1153260 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1153639 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1153772 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1154347 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1154359 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1154723 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1154915 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1155100 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1155343 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1155862 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1156073 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1156330 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1156661 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1156725 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1157069 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1157277 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1157481 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1157809 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1158164 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1158513 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1158793 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1158984 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1159252 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1159601 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1159954 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1160267 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1160457 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1160706 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1161056 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1161409 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1161750 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1161946 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1162169 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1162503 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1162853 00:33:57.693 Removing: /var/run/dpdk/spdk_pid1162931 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1163335 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1168387 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1225754 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1231447 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1244387 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1251440 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1256802 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1257498 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1265166 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1272927 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1272929 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1273948 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1274960 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1276017 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1276652 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1276814 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1277037 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1277285 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1277287 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1278293 00:33:57.954 Removing: 
/var/run/dpdk/spdk_pid1279301 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1280307 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1280983 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1280990 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1281322 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1282750 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1284232 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1295257 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1295719 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1301152 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1308544 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1311621 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1324823 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1336538 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1338659 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1339765 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1362128 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1367300 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1398993 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1404912 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1406834 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1408938 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1409274 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1409462 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1409632 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1410346 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1412363 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1413444 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1413918 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1416519 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1417222 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1418017 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1423529 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1436690 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1441940 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1449563 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1451055 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1452611 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1458338 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1463721 00:33:57.954 Removing: /var/run/dpdk/spdk_pid1473529 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1473660 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1479229 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1479558 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1479769 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1480234 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1480239 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1486285 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1486813 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1492641 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1496499 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1503277 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1510450 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1520759 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1529797 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1529832 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1554083 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1554839 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1555696 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1556443 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1557486 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1558185 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1558867 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1559558 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1565275 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1565565 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1573178 00:33:58.214 Removing: 
/var/run/dpdk/spdk_pid1573383 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1576172 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1583689 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1583737 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1590337 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1592651 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1595046 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1596422 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1598866 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1600841 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1611386 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1612036 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1612708 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1615767 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1616255 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1616788 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1621653 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1621903 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1623412 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1623995 00:33:58.214 Removing: /var/run/dpdk/spdk_pid1624148 00:33:58.214 Clean 00:33:58.214 14:19:56 -- common/autotest_common.sh@1451 -- # return 0 00:33:58.214 14:19:56 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:58.214 14:19:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:58.214 14:19:56 -- common/autotest_common.sh@10 -- # set +x 00:33:58.474 14:19:56 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:58.474 14:19:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:58.474 14:19:56 -- common/autotest_common.sh@10 -- # set +x 00:33:58.474 14:19:56 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:58.475 14:19:56 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:58.475 14:19:56 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:58.475 14:19:56 -- spdk/autotest.sh@391 -- # hash lcov 00:33:58.475 14:19:56 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:58.475 14:19:56 -- spdk/autotest.sh@393 -- # hostname 00:33:58.475 14:19:56 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:58.735 geninfo: WARNING: invalid characters removed from testname! 
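The records that follow aggregate the coverage captured above: lcov merges the pre-test baseline (cov_base.info) with the post-test capture (cov_test.info) into cov_total.info, then prunes bundled dependencies and system paths so only SPDK sources are reported. A minimal sketch of that merge-then-prune pattern, with a shortened output/ directory standing in for the job's full workspace paths:

    #!/usr/bin/env bash
    # Shared lcov flags, mirroring those visible in the records below.
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

    # Merge the baseline capture (taken before the tests ran) with the
    # post-test capture into a single tracefile.
    lcov $LCOV_OPTS \
        -a output/cov_base.info \
        -a output/cov_test.info \
        -o output/cov_total.info

    # Strip paths that should not count toward SPDK coverage; each -r
    # (--remove) pass rewrites cov_total.info in place.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r output/cov_total.info "$pattern" -o output/cov_total.info
    done

Filtering with -r rather than re-capturing keeps each pass cheap: it only rewrites an existing tracefile instead of re-reading the .gcda data.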
00:34:25.375 14:20:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:25.635 14:20:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:27.543 14:20:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:28.925 14:20:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:30.836 14:20:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:32.217 14:20:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:33.601 14:20:31 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:33.601 14:20:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.601 14:20:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:33.601 14:20:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.601 14:20:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.601 14:20:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.601 14:20:31 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.601 14:20:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.601 14:20:31 -- paths/export.sh@5 -- $ export PATH 00:34:33.601 14:20:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.601 14:20:31 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:33.601 14:20:31 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:33.601 14:20:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721046031.XXXXXX 00:34:33.601 14:20:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721046031.3C3zed 00:34:33.601 14:20:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:33.601 14:20:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:33.601 14:20:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:33.601 14:20:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:33.601 14:20:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:33.601 14:20:31 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:33.601 14:20:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:33.601 14:20:31 -- common/autotest_common.sh@10 -- $ set +x 00:34:33.861 14:20:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:33.861 14:20:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:33.861 14:20:31 -- pm/common@17 -- $ local monitor 00:34:33.861 14:20:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.861 14:20:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.861 14:20:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.861 14:20:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.861 14:20:31 -- pm/common@21 -- $ date +%s 00:34:33.861 14:20:31 -- pm/common@25 -- $ sleep 1 00:34:33.861 
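The pm/common steps here launch the power and utilization monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) in the background, each recording a pid file under the power output directory so the EXIT trap installed just below (trap stop_monitor_resources EXIT) can signal them when autopackage finishes. A rough sketch of that start/stop pattern; monitor.sh and POWER_DIR are illustrative stand-ins, not the job's literal names:

    #!/usr/bin/env bash
    POWER_DIR=output/power           # hypothetical; the job uses spdk/../output/power
    mkdir -p "$POWER_DIR"

    start_monitors() {
        local m
        for m in collect-cpu-load collect-vmstat collect-cpu-temp; do
            ./monitor.sh "$m" -d "$POWER_DIR" &    # hypothetical collector script
            echo $! > "$POWER_DIR/$m.pid"          # pid file consumed by the stop path
        done
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$POWER_DIR"/*.pid; do
            # A monitor may already have exited; ignore a failed signal.
            [[ -e $pidfile ]] && kill -TERM "$(<"$pidfile")" 2>/dev/null
        done
    }

    trap stop_monitors EXIT    # ensures the collectors stop even if the build aborts
    start_monitors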
14:20:31 -- pm/common@21 -- $ date +%s 00:34:33.861 14:20:31 -- pm/common@21 -- $ date +%s 00:34:33.861 14:20:31 -- pm/common@21 -- $ date +%s 00:34:33.861 14:20:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721046031 00:34:33.861 14:20:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721046031 00:34:33.862 14:20:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721046031 00:34:33.862 14:20:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721046031 00:34:33.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721046031_collect-vmstat.pm.log 00:34:33.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721046031_collect-cpu-load.pm.log 00:34:33.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721046031_collect-cpu-temp.pm.log 00:34:33.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721046031_collect-bmc-pm.bmc.pm.log 00:34:34.803 14:20:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:34.803 14:20:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:34.803 14:20:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:34.803 14:20:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:34.803 14:20:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:34.803 14:20:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:34.803 14:20:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:34.803 14:20:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:34.803 14:20:32 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:34.803 14:20:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:34.803 14:20:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:34.803 14:20:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:34.803 14:20:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:34.803 14:20:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.803 14:20:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:34.803 14:20:32 -- pm/common@44 -- $ pid=1636910 00:34:34.803 14:20:32 -- pm/common@50 -- $ kill -TERM 1636910 00:34:34.803 14:20:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.803 14:20:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:34.803 14:20:32 -- pm/common@44 -- $ pid=1636911 00:34:34.803 14:20:32 -- pm/common@50 -- $ 
kill -TERM 1636911 00:34:34.803 14:20:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.803 14:20:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:34.803 14:20:32 -- pm/common@44 -- $ pid=1636913 00:34:34.803 14:20:32 -- pm/common@50 -- $ kill -TERM 1636913 00:34:34.803 14:20:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.803 14:20:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:34.803 14:20:32 -- pm/common@44 -- $ pid=1636936 00:34:34.803 14:20:32 -- pm/common@50 -- $ sudo -E kill -TERM 1636936 00:34:34.803 + [[ -n 1014017 ]] 00:34:34.803 + sudo kill 1014017 00:34:34.816 [Pipeline] } 00:34:34.839 [Pipeline] // stage 00:34:34.847 [Pipeline] } 00:34:34.869 [Pipeline] // timeout 00:34:34.875 [Pipeline] } 00:34:34.896 [Pipeline] // catchError 00:34:34.904 [Pipeline] } 00:34:34.925 [Pipeline] // wrap 00:34:34.934 [Pipeline] } 00:34:34.951 [Pipeline] // catchError 00:34:34.962 [Pipeline] stage 00:34:34.964 [Pipeline] { (Epilogue) 00:34:34.979 [Pipeline] catchError 00:34:34.981 [Pipeline] { 00:34:34.996 [Pipeline] echo 00:34:34.998 Cleanup processes 00:34:35.005 [Pipeline] sh 00:34:35.304 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:35.304 1637017 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:35.304 1637457 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:35.318 [Pipeline] sh 00:34:35.599 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:35.599 ++ grep -v 'sudo pgrep' 00:34:35.599 ++ awk '{print $1}' 00:34:35.599 + sudo kill -9 1637017 00:34:35.610 [Pipeline] sh 00:34:35.889 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:48.123 [Pipeline] sh 00:34:48.411 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:48.412 Artifacts sizes are good 00:34:48.428 [Pipeline] archiveArtifacts 00:34:48.436 Archiving artifacts 00:34:48.634 [Pipeline] sh 00:34:48.947 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:48.964 [Pipeline] cleanWs 00:34:48.976 [WS-CLEANUP] Deleting project workspace... 00:34:48.976 [WS-CLEANUP] Deferred wipeout is used... 00:34:48.983 [WS-CLEANUP] done 00:34:48.985 [Pipeline] } 00:34:49.007 [Pipeline] // catchError 00:34:49.020 [Pipeline] sh 00:34:49.308 + logger -p user.info -t JENKINS-CI 00:34:49.319 [Pipeline] } 00:34:49.339 [Pipeline] // stage 00:34:49.347 [Pipeline] } 00:34:49.365 [Pipeline] // node 00:34:49.371 [Pipeline] End of Pipeline 00:34:49.406 Finished: SUCCESS
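One idiom worth pulling out of the epilogue above: the "Cleanup processes" stage lists everything still running out of the workspace with pgrep -af, filters its own pgrep invocation back out with grep -v, and force-kills the remainder, with "+ true" keeping the stage from failing when nothing is left. A compact sketch of that sweep, assuming a WORKSPACE variable in place of the hard-coded Jenkins path:

    #!/usr/bin/env bash
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # illustrative

    # pgrep -af prints "PID full-command-line" for every match; drop the
    # pgrep line itself, then keep only the pid column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

    # kill would fail on an empty list or an already-exited pid; '|| true'
    # keeps the cleanup step from marking the build as failed.
    [[ -n $pids ]] && sudo kill -9 $pids || true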